00:00:00.006 Started by upstream project "autotest-per-patch" build number 132335
00:00:00.006 originally caused by:
00:00:00.006 Started by user sys_sgci
00:00:00.099 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:06.908 The recommended git tool is: git
00:00:06.908 using credential 00000000-0000-0000-0000-000000000002
00:00:06.911 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:06.924 Fetching changes from the remote Git repository
00:00:06.928 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:06.941 Using shallow fetch with depth 1
00:00:06.941 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:06.941 > git --version # timeout=10
00:00:06.957 > git --version # 'git version 2.39.2'
00:00:06.957 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:06.970 Setting http proxy: proxy-dmz.intel.com:911
00:00:06.970 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:13.738 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:13.752 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:13.764 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:13.764 > git config core.sparsecheckout # timeout=10
00:00:13.777 > git read-tree -mu HEAD # timeout=10
00:00:13.796 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:13.823 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:13.823 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:13.947 [Pipeline] Start of Pipeline
00:00:13.960 [Pipeline] library
00:00:13.961 Loading library shm_lib@master
00:00:13.961 Library shm_lib@master is cached. Copying from home.
00:00:13.978 [Pipeline] node
00:00:28.980 Still waiting to schedule task
00:00:28.980 Waiting for next available executor on ‘vagrant-vm-host’
00:11:01.993 Running on VM-host-SM38 in /var/jenkins/workspace/raid-vg-autotest
00:11:01.994 [Pipeline] {
00:11:02.009 [Pipeline] catchError
00:11:02.012 [Pipeline] {
00:11:02.028 [Pipeline] wrap
00:11:02.038 [Pipeline] {
00:11:02.046 [Pipeline] stage
00:11:02.050 [Pipeline] { (Prologue)
00:11:02.077 [Pipeline] echo
00:11:02.078 Node: VM-host-SM38
00:11:02.086 [Pipeline] cleanWs
00:11:02.097 [WS-CLEANUP] Deleting project workspace...
00:11:02.097 [WS-CLEANUP] Deferred wipeout is used...
00:11:02.105 [WS-CLEANUP] done
00:11:02.676 [Pipeline] setCustomBuildProperty
00:11:02.809 [Pipeline] httpRequest
00:11:03.130 [Pipeline] echo
00:11:03.132 Sorcerer 10.211.164.20 is alive
00:11:03.143 [Pipeline] retry
00:11:03.146 [Pipeline] {
00:11:03.159 [Pipeline] httpRequest
00:11:03.164 HttpMethod: GET
00:11:03.164 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:11:03.165 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:11:03.166 Response Code: HTTP/1.1 200 OK
00:11:03.167 Success: Status code 200 is in the accepted range: 200,404
00:11:03.167 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:11:03.313 [Pipeline] }
00:11:03.334 [Pipeline] // retry
00:11:03.343 [Pipeline] sh
00:11:03.631 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:11:03.651 [Pipeline] httpRequest
00:11:03.951 [Pipeline] echo
00:11:03.954 Sorcerer 10.211.164.20 is alive
00:11:03.965 [Pipeline] retry
00:11:03.969 [Pipeline] {
00:11:03.986 [Pipeline] httpRequest
00:11:03.991 HttpMethod: GET
00:11:03.991 URL: http://10.211.164.20/packages/spdk_95f6a056ecf4b8cae15fa0d46b90a394eb041775.tar.gz
00:11:03.992 Sending request to url: http://10.211.164.20/packages/spdk_95f6a056ecf4b8cae15fa0d46b90a394eb041775.tar.gz
00:11:03.993 Response Code: HTTP/1.1 200 OK
00:11:03.994 Success: Status code 200 is in the accepted range: 200,404
00:11:03.995 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_95f6a056ecf4b8cae15fa0d46b90a394eb041775.tar.gz
00:11:06.277 [Pipeline] }
00:11:06.296 [Pipeline] // retry
00:11:06.304 [Pipeline] sh
00:11:06.588 + tar --no-same-owner -xf spdk_95f6a056ecf4b8cae15fa0d46b90a394eb041775.tar.gz
00:11:09.907 [Pipeline] sh
00:11:10.191 + git -C spdk log --oneline -n5
00:11:10.191 95f6a056e bdev: Add spdk_bdev_open_ext_v2() to support per-open options
00:11:10.191 a38267915 bdev: Locate all hot data in spdk_bdev_desc to the first cache line
00:11:10.191 095307e93 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:11:10.191 3b3a1a596 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit()
00:11:10.191 17c638de0 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size
00:11:10.211 [Pipeline] writeFile
00:11:10.225 [Pipeline] sh
00:11:10.510 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:11:10.524 [Pipeline] sh
00:11:10.811 + cat autorun-spdk.conf
00:11:10.811 SPDK_RUN_FUNCTIONAL_TEST=1
00:11:10.811 SPDK_RUN_ASAN=1
00:11:10.811 SPDK_RUN_UBSAN=1
00:11:10.811 SPDK_TEST_RAID=1
00:11:10.811 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:11:10.819 RUN_NIGHTLY=0
00:11:10.821 [Pipeline] }
00:11:10.834 [Pipeline] // stage
00:11:10.848 [Pipeline] stage
00:11:10.850 [Pipeline] { (Run VM)
00:11:10.863 [Pipeline] sh
00:11:11.150 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:11:11.150 + echo 'Start stage prepare_nvme.sh'
00:11:11.150 Start stage prepare_nvme.sh
00:11:11.150 + [[ -n 8 ]]
00:11:11.150 + disk_prefix=ex8
00:11:11.150 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:11:11.150 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:11:11.150 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:11:11.150 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:11:11.150 ++ SPDK_RUN_ASAN=1
00:11:11.150 ++ SPDK_RUN_UBSAN=1
00:11:11.150 ++ SPDK_TEST_RAID=1
00:11:11.150 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:11:11.150 ++ RUN_NIGHTLY=0
00:11:11.150 + cd /var/jenkins/workspace/raid-vg-autotest
00:11:11.150 + nvme_files=()
00:11:11.150 + declare -A nvme_files
00:11:11.150 + backend_dir=/var/lib/libvirt/images/backends
00:11:11.150 + nvme_files['nvme.img']=5G
00:11:11.150 + nvme_files['nvme-cmb.img']=5G
00:11:11.150 + nvme_files['nvme-multi0.img']=4G
00:11:11.150 + nvme_files['nvme-multi1.img']=4G
00:11:11.150 + nvme_files['nvme-multi2.img']=4G
00:11:11.150 + nvme_files['nvme-openstack.img']=8G
00:11:11.150 + nvme_files['nvme-zns.img']=5G
00:11:11.150 + (( SPDK_TEST_NVME_PMR == 1 ))
00:11:11.150 + (( SPDK_TEST_FTL == 1 ))
00:11:11.150 + (( SPDK_TEST_NVME_FDP == 1 ))
00:11:11.150 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:11:11.150 + for nvme in "${!nvme_files[@]}"
00:11:11.150 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G
00:11:11.150 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:11:11.150 + for nvme in "${!nvme_files[@]}"
00:11:11.150 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G
00:11:11.150 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:11:11.150 + for nvme in "${!nvme_files[@]}"
00:11:11.150 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G
00:11:11.150 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:11:11.150 + for nvme in "${!nvme_files[@]}"
00:11:11.150 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G
00:11:11.150 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:11:11.150 + for nvme in "${!nvme_files[@]}"
00:11:11.150 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G
00:11:11.411 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:11:11.411 + for nvme in "${!nvme_files[@]}"
00:11:11.411 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G
00:11:11.411 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:11:11.411 + for nvme in "${!nvme_files[@]}"
00:11:11.411 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G
00:11:11.411 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:11:11.411 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu
00:11:11.411 + echo 'End stage prepare_nvme.sh'
00:11:11.411 End stage prepare_nvme.sh
00:11:11.425 [Pipeline] sh
00:11:11.710 + DISTRO=fedora39
00:11:11.710 + CPUS=10
00:11:11.710 + RAM=12288
00:11:11.710 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:11:11.710 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -H -a -v -f fedora39
00:11:11.710 
00:11:11.710 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:11:11.710 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:11:11.710 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:11:11.710 HELP=0
00:11:11.710 DRY_RUN=0
00:11:11.710 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,
00:11:11.710 NVME_DISKS_TYPE=nvme,nvme,
00:11:11.710 NVME_AUTO_CREATE=0
00:11:11.710 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,
00:11:11.710 NVME_CMB=,,
00:11:11.710 NVME_PMR=,,
00:11:11.710 NVME_ZNS=,,
00:11:11.710 NVME_MS=,,
00:11:11.710 NVME_FDP=,,
00:11:11.710 SPDK_VAGRANT_DISTRO=fedora39
00:11:11.710 SPDK_VAGRANT_VMCPU=10
00:11:11.710 SPDK_VAGRANT_VMRAM=12288
00:11:11.710 SPDK_VAGRANT_PROVIDER=libvirt
00:11:11.710 SPDK_VAGRANT_HTTP_PROXY=
00:11:11.710 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:11:11.710 SPDK_OPENSTACK_NETWORK=0
00:11:11.710 VAGRANT_PACKAGE_BOX=0
00:11:11.710 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:11:11.710 FORCE_DISTRO=true
00:11:11.710 VAGRANT_BOX_VERSION=
00:11:11.710 EXTRA_VAGRANTFILES=
00:11:11.710 NIC_MODEL=e1000
00:11:11.710 
00:11:11.710 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:11:11.710 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:11:14.301 Bringing machine 'default' up with 'libvirt' provider...
00:11:15.246 ==> default: Creating image (snapshot of base box volume).
00:11:15.508 ==> default: Creating domain with the following settings...
00:11:15.508 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732080106_4adc19ec3d0d005e557e
00:11:15.508 ==> default: -- Domain type: kvm
00:11:15.508 ==> default: -- Cpus: 10
00:11:15.508 ==> default: -- Feature: acpi
00:11:15.508 ==> default: -- Feature: apic
00:11:15.508 ==> default: -- Feature: pae
00:11:15.508 ==> default: -- Memory: 12288M
00:11:15.508 ==> default: -- Memory Backing: hugepages: 
00:11:15.508 ==> default: -- Management MAC: 
00:11:15.508 ==> default: -- Loader: 
00:11:15.508 ==> default: -- Nvram: 
00:11:15.508 ==> default: -- Base box: spdk/fedora39
00:11:15.508 ==> default: -- Storage pool: default
00:11:15.508 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732080106_4adc19ec3d0d005e557e.img (20G)
00:11:15.508 ==> default: -- Volume Cache: default
00:11:15.508 ==> default: -- Kernel: 
00:11:15.508 ==> default: -- Initrd: 
00:11:15.508 ==> default: -- Graphics Type: vnc
00:11:15.508 ==> default: -- Graphics Port: -1
00:11:15.508 ==> default: -- Graphics IP: 127.0.0.1
00:11:15.508 ==> default: -- Graphics Password: Not defined
00:11:15.508 ==> default: -- Video Type: cirrus
00:11:15.508 ==> default: -- Video VRAM: 9216
00:11:15.508 ==> default: -- Sound Type: 
00:11:15.508 ==> default: -- Keymap: en-us
00:11:15.508 ==> default: -- TPM Path: 
00:11:15.508 ==> default: -- INPUT: type=mouse, bus=ps2
00:11:15.508 ==> default: -- Command line args: 
00:11:15.508 ==> default: -> value=-device, 
00:11:15.508 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 
00:11:15.508 ==> default: -> value=-drive, 
00:11:15.508 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-0-drive0, 
00:11:15.508 ==> default: -> value=-device, 
00:11:15.508 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:11:15.508 ==> default: -> value=-device, 
00:11:15.508 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 
00:11:15.508 ==> default: -> value=-drive, 
00:11:15.508 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-1-drive0, 
00:11:15.508 ==> default: -> value=-device, 
00:11:15.508 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:11:15.508 ==> default: -> value=-drive, 
00:11:15.508 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-1-drive1, 
00:11:15.508 ==> default: -> value=-device, 
00:11:15.508 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:11:15.508 ==> default: -> value=-drive, 
00:11:15.508 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-1-drive2, 
00:11:15.508 ==> default: -> value=-device, 
00:11:15.508 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:11:16.083 ==> default: Creating shared folders metadata...
00:11:16.083 ==> default: Starting domain.
00:11:18.004 ==> default: Waiting for domain to get an IP address...
00:11:36.120 ==> default: Waiting for SSH to become available...
00:11:36.120 ==> default: Configuring and enabling network interfaces...
00:11:39.422 default: SSH address: 192.168.121.111:22
00:11:39.422 default: SSH username: vagrant
00:11:39.422 default: SSH auth method: private key
00:11:41.410 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:11:48.080 ==> default: Mounting SSHFS shared folder...
00:11:50.010 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:11:50.010 ==> default: Checking Mount..
00:11:50.949 ==> default: Folder Successfully Mounted!
00:11:50.949 
00:11:50.949 SUCCESS!
00:11:50.949 
00:11:50.949 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:11:50.949 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:11:50.949 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:11:50.949 
00:11:50.960 [Pipeline] }
00:11:50.976 [Pipeline] // stage
00:11:50.986 [Pipeline] dir
00:11:50.987 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:11:50.988 [Pipeline] {
00:11:51.002 [Pipeline] catchError
00:11:51.005 [Pipeline] {
00:11:51.017 [Pipeline] sh
00:11:51.364 + vagrant ssh-config --host vagrant
00:11:51.364 + sed -ne '/^Host/,$p'
00:11:51.364 + tee ssh_conf
00:11:53.909 Host vagrant
00:11:53.909 HostName 192.168.121.111
00:11:53.909 User vagrant
00:11:53.909 Port 22
00:11:53.909 UserKnownHostsFile /dev/null
00:11:53.909 StrictHostKeyChecking no
00:11:53.909 PasswordAuthentication no
00:11:53.909 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:11:53.909 IdentitiesOnly yes
00:11:53.909 LogLevel FATAL
00:11:53.909 ForwardAgent yes
00:11:53.909 ForwardX11 yes
00:11:53.909 
00:11:53.926 [Pipeline] withEnv
00:11:53.929 [Pipeline] {
00:11:53.943 [Pipeline] sh
00:11:54.228 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:11:54.228 source /etc/os-release
00:11:54.228 [[ -e /image.version ]] && img=$(< /image.version)
00:11:54.228 # Minimal, systemd-like check.
00:11:54.228 if [[ -e /.dockerenv ]]; then
00:11:54.228 # Clear garbage from the node'\''s name:
00:11:54.228 # agt-er_autotest_547-896 -> autotest_547-896
00:11:54.228 # $HOSTNAME is the actual container id
00:11:54.228 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:11:54.228 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:11:54.229 # We can assume this is a mount from a host where container is running,
00:11:54.229 # so fetch its hostname to easily identify the target swarm worker.
00:11:54.229 container="$(< /etc/hostname) ($agent)"
00:11:54.229 else
00:11:54.229 # Fallback
00:11:54.229 container=$agent
00:11:54.229 fi
00:11:54.229 fi
00:11:54.229 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:11:54.229 '
00:11:54.241 [Pipeline] }
00:11:54.261 [Pipeline] // withEnv
00:11:54.270 [Pipeline] setCustomBuildProperty
00:11:54.284 [Pipeline] stage
00:11:54.287 [Pipeline] { (Tests)
00:11:54.305 [Pipeline] sh
00:11:54.637 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:11:54.911 [Pipeline] sh
00:11:55.193 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:11:55.209 [Pipeline] timeout
00:11:55.210 Timeout set to expire in 1 hr 30 min
00:11:55.212 [Pipeline] {
00:11:55.226 [Pipeline] sh
00:11:55.507 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:11:55.789 HEAD is now at 95f6a056e bdev: Add spdk_bdev_open_ext_v2() to support per-open options
00:11:55.802 [Pipeline] sh
00:11:56.087 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:11:56.359 [Pipeline] sh
00:11:56.638 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:11:56.914 [Pipeline] sh
00:11:57.248 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo'
00:11:57.248 ++ readlink -f spdk_repo
00:11:57.248 + DIR_ROOT=/home/vagrant/spdk_repo
00:11:57.248 + [[ -n /home/vagrant/spdk_repo ]]
00:11:57.248 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:11:57.248 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:11:57.248 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:11:57.248 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:11:57.248 + [[ -d /home/vagrant/spdk_repo/output ]]
00:11:57.248 + [[ raid-vg-autotest == pkgdep-* ]]
00:11:57.248 + cd /home/vagrant/spdk_repo
00:11:57.248 + source /etc/os-release
00:11:57.248 ++ NAME='Fedora Linux'
00:11:57.248 ++ VERSION='39 (Cloud Edition)'
00:11:57.248 ++ ID=fedora
00:11:57.248 ++ VERSION_ID=39
00:11:57.248 ++ VERSION_CODENAME=
00:11:57.248 ++ PLATFORM_ID=platform:f39
00:11:57.248 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:11:57.248 ++ ANSI_COLOR='0;38;2;60;110;180'
00:11:57.248 ++ LOGO=fedora-logo-icon
00:11:57.248 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:11:57.248 ++ HOME_URL=https://fedoraproject.org/
00:11:57.248 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:11:57.248 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:11:57.248 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:11:57.248 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:11:57.248 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:11:57.248 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:11:57.248 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:11:57.248 ++ SUPPORT_END=2024-11-12
00:11:57.248 ++ VARIANT='Cloud Edition'
00:11:57.248 ++ VARIANT_ID=cloud
00:11:57.248 + uname -a
00:11:57.248 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:11:57.248 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:11:57.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:57.818 Hugepages
00:11:57.818 node hugesize free / total
00:11:57.818 node0 1048576kB 0 / 0
00:11:57.818 node0 2048kB 0 / 0
00:11:57.818 
00:11:57.818 Type BDF Vendor Device NUMA Driver Device Block devices
00:11:57.818 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:11:57.818 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:11:57.818 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:11:57.818 + rm -f /tmp/spdk-ld-path
00:11:57.818 + source autorun-spdk.conf
00:11:57.818 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:11:57.818 ++ SPDK_RUN_ASAN=1
00:11:57.818 ++ SPDK_RUN_UBSAN=1
00:11:57.818 ++ SPDK_TEST_RAID=1
00:11:57.818 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:11:57.818 ++ RUN_NIGHTLY=0
00:11:57.818 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:11:57.818 + [[ -n '' ]]
00:11:57.818 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:11:57.818 + for M in /var/spdk/build-*-manifest.txt
00:11:57.818 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:11:57.818 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:11:57.818 + for M in /var/spdk/build-*-manifest.txt
00:11:57.818 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:11:57.818 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:11:57.818 + for M in /var/spdk/build-*-manifest.txt
00:11:57.818 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:11:57.818 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:11:57.818 ++ uname
00:11:57.818 + [[ Linux == \L\i\n\u\x ]]
00:11:57.818 + sudo dmesg -T
00:11:57.818 + sudo dmesg --clear
00:11:57.818 + dmesg_pid=5000
00:11:57.818 + [[ Fedora Linux == FreeBSD ]]
00:11:57.818 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:11:57.818 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:11:57.818 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:11:57.818 + [[ -x /usr/src/fio-static/fio ]]
00:11:57.818 + sudo dmesg -Tw
00:11:57.818 + export FIO_BIN=/usr/src/fio-static/fio
00:11:57.818 + FIO_BIN=/usr/src/fio-static/fio
00:11:57.818 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:11:57.818 + [[ ! -v VFIO_QEMU_BIN ]]
00:11:57.818 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:11:57.818 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:57.818 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:57.818 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:11:57.818 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:11:57.818 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:11:57.818 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:11:57.818 05:22:29 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:11:57.818 05:22:29 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:11:57.818 05:22:29 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:11:57.818 05:22:29 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:11:57.818 05:22:29 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:11:57.818 05:22:29 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:11:57.818 05:22:29 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:11:57.818 05:22:29 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:11:57.818 05:22:29 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:11:57.818 05:22:29 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:11:58.079 05:22:29 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:11:58.079 05:22:29 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:58.079 05:22:29 -- scripts/common.sh@15 -- $ shopt -s extglob
00:11:58.079 05:22:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:11:58.079 05:22:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:58.079 05:22:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:58.079 05:22:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:58.079 05:22:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:58.079 05:22:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:58.079 05:22:29 -- paths/export.sh@5 -- $ export PATH
00:11:58.079 05:22:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:58.079 05:22:29 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:11:58.079 05:22:29 -- common/autobuild_common.sh@486 -- $ date +%s
00:11:58.079 05:22:29 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732080149.XXXXXX
00:11:58.079 05:22:29 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732080149.WM39h9
00:11:58.079 05:22:29 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:11:58.079 05:22:29 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:11:58.079 05:22:29 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:11:58.079 05:22:29 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:11:58.079 05:22:29 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:11:58.079 05:22:29 -- common/autobuild_common.sh@502 -- $ get_config_params
00:11:58.079 05:22:29 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:11:58.079 05:22:29 -- common/autotest_common.sh@10 -- $ set +x
00:11:58.079 05:22:29 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:11:58.080 05:22:29 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:11:58.080 05:22:29 -- pm/common@17 -- $ local monitor
00:11:58.080 05:22:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:11:58.080 05:22:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:11:58.080 05:22:29 -- pm/common@25 -- $ sleep 1
00:11:58.080 05:22:29 -- pm/common@21 -- $ date +%s
00:11:58.080 05:22:29 -- pm/common@21 -- $ date +%s
00:11:58.080 05:22:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732080149
00:11:58.080 05:22:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732080149
00:11:58.080 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732080149_collect-cpu-load.pm.log
00:11:58.080 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732080149_collect-vmstat.pm.log
00:11:59.023 05:22:30 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:11:59.023 05:22:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:11:59.023 05:22:30 -- spdk/autobuild.sh@12 -- $ umask 022
00:11:59.023 05:22:30 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:11:59.023 05:22:30 -- spdk/autobuild.sh@16 -- $ date -u
00:11:59.023 Wed Nov 20 05:22:30 AM UTC 2024
00:11:59.023 05:22:30 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:11:59.023 v25.01-pre-189-g95f6a056e
00:11:59.023 05:22:30 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:11:59.023 05:22:30 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:11:59.023 05:22:30 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:11:59.023 05:22:30 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:11:59.023 05:22:30 -- common/autotest_common.sh@10 -- $ set +x
00:11:59.023 ************************************
00:11:59.023 START TEST asan
00:11:59.023 ************************************
00:11:59.023 using asan
00:11:59.023 05:22:30 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:11:59.023 
00:11:59.023 real 0m0.000s
00:11:59.023 user 0m0.000s
00:11:59.023 sys 0m0.000s
00:11:59.023 05:22:30 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:11:59.023 ************************************
00:11:59.023 END TEST asan
00:11:59.023 ************************************
00:11:59.023 05:22:30 asan -- common/autotest_common.sh@10 -- $ set +x
00:11:59.023 05:22:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:11:59.023 05:22:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:11:59.023 05:22:30 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:11:59.023 05:22:30 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:11:59.023 05:22:30 -- common/autotest_common.sh@10 -- $ set +x
00:11:59.023 ************************************
00:11:59.023 START TEST ubsan
00:11:59.023 ************************************
00:11:59.023 using ubsan
00:11:59.024 05:22:30 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:11:59.024 
00:11:59.024 real 0m0.000s
00:11:59.024 user 0m0.000s
00:11:59.024 sys 0m0.000s
00:11:59.024 05:22:30 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:11:59.024 ************************************
00:11:59.024 END TEST ubsan
00:11:59.024 ************************************
00:11:59.024 05:22:30 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:11:59.024 05:22:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:11:59.024 05:22:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:11:59.024 05:22:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:11:59.024 05:22:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:11:59.024 05:22:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:11:59.024 05:22:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:11:59.024 05:22:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:11:59.024 05:22:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:11:59.024 05:22:30 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:11:59.284 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:11:59.284 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:11:59.545 Using 'verbs' RDMA provider
00:12:10.568 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:12:20.576 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:12:20.576 Creating mk/config.mk...done.
00:12:20.576 Creating mk/cc.flags.mk...done.
00:12:20.576 Type 'make' to build.
00:12:20.576 05:22:51 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:12:20.576 05:22:51 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:12:20.576 05:22:51 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:12:20.576 05:22:51 -- common/autotest_common.sh@10 -- $ set +x
00:12:20.576 ************************************
00:12:20.576 START TEST make
00:12:20.576 ************************************
00:12:20.576 05:22:51 make -- common/autotest_common.sh@1127 -- $ make -j10
00:12:20.576 make[1]: Nothing to be done for 'all'.
00:12:30.628 The Meson build system
00:12:30.628 Version: 1.5.0
00:12:30.628 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:12:30.628 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:12:30.628 Build type: native build
00:12:30.628 Program cat found: YES (/usr/bin/cat)
00:12:30.628 Project name: DPDK
00:12:30.628 Project version: 24.03.0
00:12:30.628 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:12:30.628 C linker for the host machine: cc ld.bfd 2.40-14
00:12:30.628 Host machine cpu family: x86_64
00:12:30.628 Host machine cpu: x86_64
00:12:30.628 Message: ## Building in Developer Mode ##
00:12:30.628 Program pkg-config found: YES (/usr/bin/pkg-config)
00:12:30.628 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:12:30.628 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:12:30.628 Program python3 found: YES (/usr/bin/python3)
00:12:30.628 Program cat found: YES (/usr/bin/cat)
00:12:30.628 Compiler for C supports arguments -march=native: YES
00:12:30.628 Checking for size of "void *" : 8
00:12:30.628 Checking for size of "void *" : 8 (cached)
00:12:30.628 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:12:30.628 Library m found: YES
00:12:30.628 Library numa found: YES
00:12:30.628 Has header "numaif.h" : YES
00:12:30.628 Library fdt found: NO
00:12:30.628 Library execinfo found: NO
00:12:30.628 Has header "execinfo.h" : YES
00:12:30.628 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:12:30.628 Run-time dependency libarchive found: NO (tried pkgconfig)
00:12:30.628 Run-time dependency libbsd found: NO (tried pkgconfig)
00:12:30.628 Run-time dependency jansson found: NO (tried pkgconfig)
00:12:30.628 Run-time dependency openssl found: YES 3.1.1
00:12:30.628 Run-time dependency libpcap found: YES 1.10.4
00:12:30.628 Has header "pcap.h" with dependency libpcap: YES
00:12:30.628 Compiler for C supports arguments -Wcast-qual: YES
00:12:30.628 Compiler for C supports arguments -Wdeprecated: YES
00:12:30.628 Compiler for C supports arguments -Wformat: YES
00:12:30.628 Compiler for C supports arguments -Wformat-nonliteral: NO
00:12:30.628 Compiler for C supports arguments -Wformat-security: NO
00:12:30.628 Compiler for C supports arguments -Wmissing-declarations: YES
00:12:30.628 Compiler for C supports arguments -Wmissing-prototypes: YES
00:12:30.628 Compiler for C supports arguments -Wnested-externs: YES
00:12:30.628 Compiler for C supports arguments -Wold-style-definition: YES
00:12:30.628 Compiler for C supports arguments -Wpointer-arith: YES
00:12:30.628 Compiler for C supports arguments -Wsign-compare: YES
00:12:30.628 Compiler for C supports arguments -Wstrict-prototypes: YES
00:12:30.628 Compiler for C supports arguments -Wundef: YES
00:12:30.628 Compiler for C supports arguments -Wwrite-strings: YES
00:12:30.628 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:12:30.628 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:12:30.628 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:12:30.628 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:12:30.628 Program objdump found: YES (/usr/bin/objdump)
00:12:30.628 Compiler for C supports arguments -mavx512f: YES
00:12:30.628 Checking if "AVX512 checking" compiles: YES
00:12:30.628 Fetching value of define "__SSE4_2__" : 1
00:12:30.628 Fetching value of define "__AES__" : 1
00:12:30.628 Fetching value of define "__AVX__" : 1
00:12:30.628 Fetching value of define "__AVX2__" : 1
00:12:30.628 Fetching value of define "__AVX512BW__" : 1
00:12:30.628 Fetching value of define "__AVX512CD__" : 1
00:12:30.628 Fetching value of define "__AVX512DQ__" : 1
00:12:30.628 Fetching value of define "__AVX512F__" : 1
00:12:30.628 Fetching value of define "__AVX512VL__" : 1
00:12:30.628 Fetching value of define "__PCLMUL__" : 1
00:12:30.628 Fetching value of define "__RDRND__" : 1
00:12:30.628 Fetching value of define "__RDSEED__" : 1
00:12:30.628 Fetching value of define "__VPCLMULQDQ__" : 1
00:12:30.628 Fetching value of define "__znver1__" : (undefined)
00:12:30.628 Fetching value of define "__znver2__" : (undefined)
00:12:30.628 Fetching value of define "__znver3__" : (undefined)
00:12:30.628 Fetching value of define "__znver4__" : (undefined)
00:12:30.628 Library asan found: YES
00:12:30.628 Compiler for C supports arguments -Wno-format-truncation: YES
00:12:30.628 Message: lib/log: Defining dependency "log"
00:12:30.628 Message: lib/kvargs: Defining dependency "kvargs"
00:12:30.628 Message: lib/telemetry: Defining dependency "telemetry"
00:12:30.629 Library rt found: YES
00:12:30.629 Checking for function "getentropy" : NO
00:12:30.629 Message: lib/eal: Defining dependency "eal"
00:12:30.629 Message: lib/ring: Defining dependency "ring"
00:12:30.629 Message: lib/rcu: Defining dependency "rcu"
00:12:30.629 Message: lib/mempool: Defining dependency "mempool"
00:12:30.629 Message: lib/mbuf: Defining dependency "mbuf"
00:12:30.629 Fetching value of define "__PCLMUL__" : 1 (cached)
00:12:30.629 Fetching value of define "__AVX512F__" : 1 (cached)
00:12:30.629 Fetching value of define "__AVX512BW__" : 1 (cached)
00:12:30.629 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:12:30.629 Fetching value of define "__AVX512VL__" : 1 (cached)
00:12:30.629 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:12:30.629 Compiler for C supports arguments -mpclmul: YES
00:12:30.629 Compiler for C supports arguments -maes: YES
00:12:30.629 Compiler for C supports arguments -mavx512f: YES (cached)
00:12:30.629 Compiler for C supports arguments -mavx512bw: YES
00:12:30.629 Compiler for C supports arguments -mavx512dq: YES
00:12:30.629 Compiler for C supports arguments -mavx512vl: YES
00:12:30.629 Compiler for C supports arguments -mvpclmulqdq: YES
00:12:30.629 Compiler for C supports arguments -mavx2: YES
00:12:30.629 Compiler for C supports arguments -mavx: YES
00:12:30.629 Message: lib/net: Defining dependency "net"
00:12:30.629 Message: lib/meter: Defining dependency "meter"
00:12:30.629 Message: lib/ethdev: Defining dependency "ethdev"
00:12:30.629 Message: lib/pci: Defining dependency "pci"
00:12:30.629 Message: lib/cmdline: Defining dependency "cmdline"
00:12:30.629 Message: lib/hash: Defining dependency "hash"
00:12:30.629 Message: lib/timer: Defining dependency "timer"
00:12:30.629 Message: lib/compressdev: Defining dependency "compressdev"
00:12:30.629 Message: lib/cryptodev: Defining dependency "cryptodev"
00:12:30.629 Message: lib/dmadev: Defining dependency "dmadev"
00:12:30.629 Compiler for C supports arguments -Wno-cast-qual: YES
00:12:30.629 Message: lib/power: Defining dependency "power"
00:12:30.629 Message: lib/reorder: Defining dependency "reorder"
00:12:30.629 Message: lib/security: Defining dependency "security"
00:12:30.629 Has header "linux/userfaultfd.h" : YES
00:12:30.629 Has header "linux/vduse.h" : YES
00:12:30.629 Message: lib/vhost: Defining dependency "vhost"
00:12:30.629 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:12:30.629 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:12:30.629 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:12:30.629 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:12:30.629 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:12:30.629 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:12:30.629 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:12:30.629 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:12:30.629 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:12:30.629 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:12:30.629 Program doxygen found: YES (/usr/local/bin/doxygen)
00:12:30.629 Configuring doxy-api-html.conf using configuration
00:12:30.629 Configuring doxy-api-man.conf using configuration
00:12:30.629 Program mandb found: YES (/usr/bin/mandb)
00:12:30.629 Program sphinx-build found: NO
00:12:30.629 Configuring rte_build_config.h using configuration
00:12:30.629 Message:
00:12:30.629 =================
00:12:30.629 Applications Enabled
00:12:30.629 =================
00:12:30.629
00:12:30.629 apps:
00:12:30.629
00:12:30.629
00:12:30.629 Message:
00:12:30.629 =================
00:12:30.629 Libraries Enabled
00:12:30.629 =================
00:12:30.629
00:12:30.629 libs:
00:12:30.629 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:12:30.629 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:12:30.629 cryptodev, dmadev, power, reorder, security, vhost,
00:12:30.629
00:12:30.629 Message:
00:12:30.629 ===============
00:12:30.629 Drivers Enabled
00:12:30.629 ===============
00:12:30.629
00:12:30.629 common:
00:12:30.629
00:12:30.629 bus:
00:12:30.629 pci, vdev,
00:12:30.629 mempool:
00:12:30.629 ring,
00:12:30.629 dma:
00:12:30.629
00:12:30.629 net:
00:12:30.629
00:12:30.629 crypto:
00:12:30.629
00:12:30.629 compress:
00:12:30.629
00:12:30.629 vdpa:
00:12:30.629
00:12:30.629
00:12:30.629 Message:
00:12:30.629 =================
00:12:30.629 Content Skipped
00:12:30.629 =================
00:12:30.629
00:12:30.629 apps:
00:12:30.629 dumpcap: explicitly disabled via build config
00:12:30.629 graph: explicitly disabled via build config
00:12:30.629 pdump: explicitly disabled via build config
00:12:30.629 proc-info: explicitly disabled via build config
00:12:30.629 test-acl: explicitly disabled via build config
00:12:30.629 test-bbdev: explicitly disabled via build config
00:12:30.629 test-cmdline: explicitly disabled via build config
00:12:30.629 test-compress-perf: explicitly disabled via build config
00:12:30.629 test-crypto-perf: explicitly disabled via build config
00:12:30.629 test-dma-perf: explicitly disabled via build config
00:12:30.629 test-eventdev: explicitly disabled via build config
00:12:30.629 test-fib: explicitly disabled via build config
00:12:30.629 test-flow-perf: explicitly disabled via build config
00:12:30.629 test-gpudev: explicitly disabled via build config
00:12:30.629 test-mldev: explicitly disabled via build config
00:12:30.629 test-pipeline: explicitly disabled via build config
00:12:30.629 test-pmd: explicitly disabled via build config
00:12:30.629 test-regex: explicitly disabled via build config
00:12:30.629 test-sad: explicitly disabled via build config
00:12:30.629 test-security-perf: explicitly disabled via build config
00:12:30.629
00:12:30.629 libs:
00:12:30.629 argparse: explicitly disabled via build config
00:12:30.629 metrics: explicitly disabled via build config
00:12:30.629 acl: explicitly disabled via build config
00:12:30.629 bbdev: explicitly disabled via build config
00:12:30.629 bitratestats: explicitly disabled via build config
00:12:30.629 bpf: explicitly disabled via build config
00:12:30.629 cfgfile: explicitly disabled via build config
00:12:30.629 distributor: explicitly disabled via build config
00:12:30.629 efd: explicitly disabled via build config
00:12:30.629 eventdev: explicitly disabled via build config
00:12:30.629 dispatcher: explicitly disabled via build config
00:12:30.629 gpudev: explicitly disabled via build config
00:12:30.629 gro: explicitly disabled via build config
00:12:30.629 gso: explicitly disabled via build config
00:12:30.629 ip_frag: explicitly disabled via build config
00:12:30.629 jobstats: explicitly disabled via build config
00:12:30.629 latencystats: explicitly disabled via build config
00:12:30.629 lpm: explicitly disabled via build config
00:12:30.629 member: explicitly disabled via build config
00:12:30.629 pcapng: explicitly disabled via build config
00:12:30.629 rawdev: explicitly disabled via build config
00:12:30.629 regexdev: explicitly disabled via build config
00:12:30.629 mldev: explicitly disabled via build config
00:12:30.629 rib: explicitly disabled via build config
00:12:30.629 sched: explicitly disabled via build config
00:12:30.629 stack: explicitly disabled via build config
00:12:30.630 ipsec: explicitly disabled via build config
00:12:30.630 pdcp: explicitly disabled via build config
00:12:30.630 fib: explicitly disabled via build config
00:12:30.630 port: explicitly disabled via build config
00:12:30.630 pdump: explicitly disabled via build config
00:12:30.630 table: explicitly disabled via build config
00:12:30.630 pipeline: explicitly disabled via build config
00:12:30.630 graph: explicitly disabled via build config
00:12:30.630 node: explicitly disabled via build config
00:12:30.630
00:12:30.630 drivers:
00:12:30.630 common/cpt: not in enabled drivers build config
00:12:30.630 common/dpaax: not in enabled drivers build config
00:12:30.630 common/iavf: not in enabled drivers build config
00:12:30.630 common/idpf: not in enabled drivers build config
00:12:30.630 common/ionic: not in enabled drivers build config
00:12:30.630 common/mvep: not in enabled drivers build config
00:12:30.630 common/octeontx: not in enabled drivers build config
00:12:30.630 bus/auxiliary: not in enabled drivers build config
00:12:30.630 bus/cdx: not in enabled drivers build config
00:12:30.630 bus/dpaa: not in enabled drivers build config
00:12:30.630 bus/fslmc: not in enabled drivers build config
00:12:30.630 bus/ifpga: not in enabled drivers build config
00:12:30.630 bus/platform: not in enabled drivers build config
00:12:30.630 bus/uacce: not in enabled drivers build config
00:12:30.630 bus/vmbus: not in enabled drivers build config
00:12:30.630 common/cnxk: not in enabled drivers build config
00:12:30.630 common/mlx5: not in enabled drivers build config
00:12:30.630 common/nfp: not in enabled drivers build config
00:12:30.630 common/nitrox: not in enabled drivers build config
00:12:30.630 common/qat: not in enabled drivers build config
00:12:30.630 common/sfc_efx: not in enabled drivers build config
00:12:30.630 mempool/bucket: not in enabled drivers build config
00:12:30.630 mempool/cnxk: not in enabled drivers build config
00:12:30.630 mempool/dpaa: not in enabled drivers build config
00:12:30.630 mempool/dpaa2: not in enabled drivers build config
00:12:30.630 mempool/octeontx: not in enabled drivers build config
00:12:30.630 mempool/stack: not in enabled drivers build config
00:12:30.630 dma/cnxk: not in enabled drivers build config
00:12:30.630 dma/dpaa: not in enabled drivers build config
00:12:30.630 dma/dpaa2: not in enabled drivers build config
00:12:30.630 dma/hisilicon: not in enabled drivers build config
00:12:30.630 dma/idxd: not in enabled drivers build config
00:12:30.630 dma/ioat: not in enabled drivers build config
00:12:30.630 dma/skeleton: not in enabled drivers build config
00:12:30.630 net/af_packet: not in enabled drivers build config
00:12:30.630 net/af_xdp: not in enabled drivers build config
00:12:30.630 net/ark: not in enabled drivers build config
00:12:30.630 net/atlantic: not in enabled drivers build config
00:12:30.630 net/avp: not in enabled drivers build config
00:12:30.630 net/axgbe: not in enabled drivers build config
00:12:30.630 net/bnx2x: not in enabled drivers build config
00:12:30.630 net/bnxt: not in enabled drivers build config
00:12:30.630 net/bonding: not in enabled drivers build config
00:12:30.630 net/cnxk: not in enabled drivers build config
00:12:30.630 net/cpfl: not in enabled drivers build config
00:12:30.630 net/cxgbe: not in enabled drivers build config
00:12:30.630 net/dpaa: not in enabled drivers build config
00:12:30.630 net/dpaa2: not in enabled drivers build config
00:12:30.630 net/e1000: not in enabled drivers build config
00:12:30.630 net/ena: not in enabled drivers build config
00:12:30.630 net/enetc: not in enabled drivers build config
00:12:30.630 net/enetfec: not in enabled drivers build config
00:12:30.630 net/enic: not in enabled drivers build config
00:12:30.630 net/failsafe: not in enabled drivers build config
00:12:30.630 net/fm10k: not in enabled drivers build config
00:12:30.630 net/gve: not in enabled drivers build config
00:12:30.630 net/hinic: not in enabled drivers build config
00:12:30.630 net/hns3: not in enabled drivers build config
00:12:30.630 net/i40e: not in enabled drivers build config
00:12:30.630 net/iavf: not in enabled drivers build config
00:12:30.630 net/ice: not in enabled drivers build config
00:12:30.630 net/idpf: not in enabled drivers build config
00:12:30.630 net/igc: not in enabled drivers build config
00:12:30.630 net/ionic: not in enabled drivers build config
00:12:30.630 net/ipn3ke: not in enabled drivers build config
00:12:30.630 net/ixgbe: not in enabled drivers build config
00:12:30.630 net/mana: not in enabled drivers build config
00:12:30.630 net/memif: not in enabled drivers build config
00:12:30.630 net/mlx4: not in enabled drivers build config
00:12:30.630 net/mlx5: not in enabled drivers build config
00:12:30.630 net/mvneta: not in enabled drivers build config
00:12:30.630 net/mvpp2: not in enabled drivers build config
00:12:30.630 net/netvsc: not in enabled drivers build config
00:12:30.630 net/nfb: not in enabled drivers build config
00:12:30.630 net/nfp: not in enabled drivers build config
00:12:30.630 net/ngbe: not in enabled drivers build config
00:12:30.630 net/null: not in enabled drivers build config
00:12:30.630 net/octeontx: not in enabled drivers build config
00:12:30.630 net/octeon_ep: not in enabled drivers build config
00:12:30.630 net/pcap: not in enabled drivers build config
00:12:30.630 net/pfe: not in enabled drivers build config
00:12:30.630 net/qede: not in enabled drivers build config
00:12:30.630 net/ring: not in enabled drivers build config
00:12:30.630 net/sfc: not in enabled drivers build config
00:12:30.630 net/softnic: not in enabled drivers build config
00:12:30.630 net/tap: not in enabled drivers build config
00:12:30.630 net/thunderx: not in enabled drivers build config
00:12:30.630 net/txgbe: not in enabled drivers build config
00:12:30.630 net/vdev_netvsc: not in enabled drivers build config
00:12:30.630 net/vhost: not in enabled drivers build config
00:12:30.630 net/virtio: not in enabled drivers build config
00:12:30.630 net/vmxnet3: not in enabled drivers build config
00:12:30.630 raw/*: missing internal dependency, "rawdev"
00:12:30.630 crypto/armv8: not in enabled drivers build config
00:12:30.630 crypto/bcmfs: not in enabled drivers build config
00:12:30.630 crypto/caam_jr: not in enabled drivers build config
00:12:30.630 crypto/ccp: not in enabled drivers build config
00:12:30.630 crypto/cnxk: not in enabled drivers build config
00:12:30.630 crypto/dpaa_sec: not in enabled drivers build config
00:12:30.630 crypto/dpaa2_sec: not in enabled drivers build config
00:12:30.630 crypto/ipsec_mb: not in enabled drivers build config
00:12:30.630 crypto/mlx5: not in enabled drivers build config
00:12:30.630 crypto/mvsam: not in enabled drivers build config
00:12:30.630 crypto/nitrox: not in enabled drivers build config
00:12:30.630 crypto/null: not in enabled drivers build config
00:12:30.630 crypto/octeontx: not in enabled drivers build config
00:12:30.630 crypto/openssl: not in enabled drivers build config
00:12:30.630 crypto/scheduler: not in enabled drivers build config
00:12:30.630 crypto/uadk: not in enabled drivers build config
00:12:30.630 crypto/virtio: not in enabled drivers build config
00:12:30.630 compress/isal: not in enabled drivers build config
00:12:30.630 compress/mlx5: not in enabled drivers build config
00:12:30.630 compress/nitrox: not in enabled drivers build config
00:12:30.630 compress/octeontx: not in enabled drivers build config
00:12:30.630 compress/zlib: not in enabled drivers build config
00:12:30.630 regex/*: missing internal dependency, "regexdev"
00:12:30.630 ml/*: missing internal dependency, "mldev"
00:12:30.630 vdpa/ifc: not in enabled drivers build config
00:12:30.630 vdpa/mlx5: not in enabled drivers build config
00:12:30.630 vdpa/nfp: not in enabled drivers build config
00:12:30.630 vdpa/sfc: not in enabled drivers build config
00:12:30.630 event/*: missing internal dependency, "eventdev"
00:12:30.630 baseband/*: missing internal dependency, "bbdev"
00:12:30.630 gpu/*: missing internal dependency, "gpudev"
00:12:30.630
00:12:30.630
00:12:30.925 Build targets in project: 84
00:12:30.925
00:12:30.925 DPDK 24.03.0
00:12:30.925
00:12:30.925 User defined options
00:12:30.925 buildtype : debug
00:12:30.925 default_library : shared
00:12:30.925 libdir : lib
00:12:30.925 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:12:30.925 b_sanitize : address
00:12:30.925 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:12:30.925 c_link_args :
00:12:30.925 cpu_instruction_set: native
00:12:30.925 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:12:30.925 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:12:30.925 enable_docs : false
00:12:30.925 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:12:30.925 enable_kmods : false
00:12:30.925 max_lcores : 128
00:12:30.925 tests : false
00:12:30.925
00:12:30.925 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:12:31.185 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:12:31.445 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:12:31.445 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:12:31.445 [3/267] Linking static target lib/librte_kvargs.a
00:12:31.445 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:12:31.445 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:12:31.445 [6/267] Linking static target lib/librte_log.a
00:12:31.705 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:12:31.705 [8/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:12:31.705 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:12:31.967 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:12:31.967 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:12:31.967 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:12:31.967 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:12:31.967 [14/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:12:31.967 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:12:31.967 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:12:32.226 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:12:32.226 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:12:32.226 [19/267] Linking static target lib/librte_telemetry.a
00:12:32.226 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:12:32.485 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:12:32.485 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:12:32.485 [23/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:12:32.485 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:12:32.485 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:12:32.485 [26/267] Linking target lib/librte_log.so.24.1
00:12:32.746 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:12:32.746 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:12:32.746 [29/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:12:32.746 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:12:33.006 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:12:33.007 [32/267] Linking target lib/librte_kvargs.so.24.1
00:12:33.007 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:12:33.007 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:12:33.007 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:12:33.267 [36/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:12:33.267 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:12:33.267 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:12:33.267 [39/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:12:33.267 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:12:33.267 [41/267] Linking target lib/librte_telemetry.so.24.1
00:12:33.267 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:12:33.267 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:12:33.528 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:12:33.528 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:12:33.528 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:12:33.528 [47/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:12:33.528 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:12:33.528 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:12:33.789 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:12:33.789 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:12:33.789 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:12:33.789 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:12:34.050 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:12:34.050 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:12:34.050 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:12:34.050 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:12:34.050 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:12:34.311 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:12:34.311 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:12:34.311 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:12:34.311 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:12:34.311 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:12:34.572 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:12:34.572 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:12:34.572 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:12:34.572 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:12:34.869 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:12:34.869 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:12:34.869 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:12:34.869 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:12:34.869 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:12:34.869 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:12:34.869 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:12:34.869 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:12:35.129 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:12:35.129 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:12:35.129 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:12:35.391 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:12:35.391 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:12:35.391 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:12:35.391 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:12:35.391 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:12:35.391 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:12:35.391 [85/267] Linking static target lib/librte_ring.a
00:12:35.654 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:12:35.654 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:12:35.917 [88/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:12:35.917 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:12:35.917 [90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:12:35.917 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:12:35.917 [92/267] Linking static target lib/librte_eal.a
00:12:35.917 [93/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:12:35.917 [94/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:12:35.917 [95/267] Linking static target lib/librte_rcu.a
00:12:36.178 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:12:36.178 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:12:36.178 [98/267] Linking static target lib/librte_mempool.a
00:12:36.178 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:12:36.178 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:12:36.440 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:12:36.440 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:12:36.440 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:12:36.440 [104/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:12:36.440 [105/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:12:36.440 [106/267] Linking static target lib/librte_meter.a
00:12:36.703 [107/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:12:36.703 [108/267] Linking static target lib/librte_mbuf.a
00:12:36.965 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:12:36.965 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:12:36.965 [111/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:12:36.965 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:12:36.965 [113/267] Linking static target lib/librte_net.a
00:12:36.965 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:12:36.965 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:12:37.538 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:12:37.538 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:12:37.538 [118/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:12:37.799 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:12:37.799 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:12:37.799 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:12:38.061 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:12:38.061 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:12:38.061 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:12:38.061 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:12:38.061 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:12:38.061 [127/267] Linking static target lib/librte_pci.a
00:12:38.061 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:12:38.322 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:12:38.322 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:12:38.322 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:12:38.322 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:12:38.322 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:12:38.322 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:12:38.322 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:12:38.322 [136/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:12:38.582 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:12:38.582 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:12:38.582 [139/267]
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:12:38.582 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:12:38.582 [141/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:12:38.582 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:12:38.582 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:12:38.582 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:12:38.582 [145/267] Linking static target lib/librte_cmdline.a 00:12:38.843 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:12:38.843 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:12:38.843 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:12:39.103 [149/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:12:39.103 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:12:39.103 [151/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:12:39.103 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:12:39.103 [153/267] Linking static target lib/librte_timer.a 00:12:39.366 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:12:39.366 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:12:39.366 [156/267] Linking static target lib/librte_compressdev.a 00:12:39.366 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:12:39.366 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:12:39.626 [159/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:12:39.626 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:12:39.626 [161/267] Linking static target lib/librte_ethdev.a 00:12:39.626 [162/267] 
Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:12:39.626 [163/267] Linking static target lib/librte_hash.a 00:12:39.626 [164/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:12:39.886 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:12:39.886 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:12:39.886 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:12:39.886 [168/267] Linking static target lib/librte_dmadev.a 00:12:39.886 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:12:40.147 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:12:40.147 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:12:40.147 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:12:40.147 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:12:40.405 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:40.406 [175/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:12:40.406 [176/267] Linking static target lib/librte_cryptodev.a 00:12:40.666 [177/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:12:40.666 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:12:40.666 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:12:40.666 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:12:40.666 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:12:40.666 [182/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:12:40.666 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:12:40.666 [184/267] 
Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:40.926 [185/267] Linking static target lib/librte_power.a 00:12:40.926 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:12:40.926 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:12:40.926 [188/267] Linking static target lib/librte_reorder.a 00:12:41.186 [189/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:12:41.186 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:12:41.186 [191/267] Linking static target lib/librte_security.a 00:12:41.186 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:12:41.447 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:12:41.710 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:12:41.710 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:12:41.973 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:12:41.973 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:12:41.973 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:12:41.973 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:12:42.235 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:12:42.235 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:12:42.235 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:12:42.235 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:12:42.235 [204/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:12:42.235 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:12:42.495 [206/267] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:42.495 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:12:42.495 [208/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:12:42.495 [209/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:12:42.495 [210/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:12:42.754 [211/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:12:42.754 [212/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:12:42.754 [213/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:12:42.754 [214/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:12:42.754 [215/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:12:42.754 [216/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:12:42.754 [217/267] Linking static target drivers/librte_bus_vdev.a 00:12:42.754 [218/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:12:42.754 [219/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:12:42.754 [220/267] Linking static target drivers/librte_bus_pci.a 00:12:43.013 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:12:43.013 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:12:43.013 [223/267] Linking static target drivers/librte_mempool_ring.a 00:12:43.013 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:12:43.013 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:43.274 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:12:43.535 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:12:44.920 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:12:44.920 [229/267] Linking target lib/librte_eal.so.24.1 00:12:44.920 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:12:44.920 [231/267] Linking target lib/librte_meter.so.24.1 00:12:44.920 [232/267] Linking target lib/librte_ring.so.24.1 00:12:44.920 [233/267] Linking target lib/librte_timer.so.24.1 00:12:44.920 [234/267] Linking target lib/librte_pci.so.24.1 00:12:44.920 [235/267] Linking target lib/librte_dmadev.so.24.1 00:12:44.920 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:12:44.920 [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:12:44.920 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:12:44.920 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:12:44.920 [240/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:12:44.920 [241/267] Linking target lib/librte_rcu.so.24.1 00:12:44.920 [242/267] Linking target lib/librte_mempool.so.24.1 00:12:44.920 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:12:44.920 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:12:45.182 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:12:45.182 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:12:45.182 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:12:45.182 [248/267] Linking target lib/librte_mbuf.so.24.1 00:12:45.182 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:12:45.182 [250/267] Linking target lib/librte_reorder.so.24.1 00:12:45.182 [251/267] Linking target 
lib/librte_net.so.24.1 00:12:45.182 [252/267] Linking target lib/librte_compressdev.so.24.1 00:12:45.182 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:12:45.443 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:12:45.443 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:12:45.443 [256/267] Linking target lib/librte_hash.so.24.1 00:12:45.443 [257/267] Linking target lib/librte_security.so.24.1 00:12:45.443 [258/267] Linking target lib/librte_cmdline.so.24.1 00:12:45.443 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:45.443 [260/267] Linking target lib/librte_ethdev.so.24.1 00:12:45.443 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:12:45.703 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:12:45.703 [263/267] Linking target lib/librte_power.so.24.1 00:12:47.618 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:12:47.618 [265/267] Linking static target lib/librte_vhost.a 00:12:48.560 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:12:48.560 [267/267] Linking target lib/librte_vhost.so.24.1 00:12:48.560 INFO: autodetecting backend as ninja 00:12:48.560 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:13:06.678 CC lib/log/log.o 00:13:06.678 CC lib/log/log_deprecated.o 00:13:06.678 CC lib/log/log_flags.o 00:13:06.678 CC lib/ut_mock/mock.o 00:13:06.678 CC lib/ut/ut.o 00:13:06.678 LIB libspdk_ut.a 00:13:06.678 LIB libspdk_log.a 00:13:06.678 LIB libspdk_ut_mock.a 00:13:06.679 SO libspdk_ut.so.2.0 00:13:06.679 SO libspdk_log.so.7.1 00:13:06.679 SO libspdk_ut_mock.so.6.0 00:13:06.679 SYMLINK libspdk_ut.so 00:13:06.679 SYMLINK libspdk_ut_mock.so 00:13:06.679 SYMLINK libspdk_log.so 
00:13:06.679 CC lib/dma/dma.o 00:13:06.679 CC lib/util/base64.o 00:13:06.679 CC lib/util/bit_array.o 00:13:06.679 CC lib/util/cpuset.o 00:13:06.679 CC lib/util/crc16.o 00:13:06.679 CXX lib/trace_parser/trace.o 00:13:06.679 CC lib/util/crc32.o 00:13:06.679 CC lib/util/crc32c.o 00:13:06.679 CC lib/ioat/ioat.o 00:13:06.679 CC lib/vfio_user/host/vfio_user_pci.o 00:13:06.679 CC lib/util/crc32_ieee.o 00:13:06.679 CC lib/util/crc64.o 00:13:06.679 CC lib/util/dif.o 00:13:06.679 CC lib/util/fd.o 00:13:06.679 LIB libspdk_dma.a 00:13:06.679 CC lib/util/fd_group.o 00:13:06.679 SO libspdk_dma.so.5.0 00:13:06.679 CC lib/util/file.o 00:13:06.679 CC lib/vfio_user/host/vfio_user.o 00:13:06.679 CC lib/util/hexlify.o 00:13:06.679 SYMLINK libspdk_dma.so 00:13:06.679 CC lib/util/iov.o 00:13:06.940 LIB libspdk_ioat.a 00:13:06.940 CC lib/util/math.o 00:13:06.940 SO libspdk_ioat.so.7.0 00:13:06.940 CC lib/util/net.o 00:13:06.940 SYMLINK libspdk_ioat.so 00:13:06.940 CC lib/util/pipe.o 00:13:06.940 CC lib/util/strerror_tls.o 00:13:06.940 CC lib/util/string.o 00:13:06.940 CC lib/util/uuid.o 00:13:06.940 CC lib/util/xor.o 00:13:06.940 LIB libspdk_vfio_user.a 00:13:06.940 CC lib/util/zipf.o 00:13:06.940 CC lib/util/md5.o 00:13:06.940 SO libspdk_vfio_user.so.5.0 00:13:06.940 SYMLINK libspdk_vfio_user.so 00:13:07.204 LIB libspdk_util.a 00:13:07.466 SO libspdk_util.so.10.1 00:13:07.466 SYMLINK libspdk_util.so 00:13:07.466 LIB libspdk_trace_parser.a 00:13:07.466 SO libspdk_trace_parser.so.6.0 00:13:07.731 SYMLINK libspdk_trace_parser.so 00:13:07.731 CC lib/json/json_parse.o 00:13:07.731 CC lib/json/json_util.o 00:13:07.731 CC lib/vmd/vmd.o 00:13:07.731 CC lib/json/json_write.o 00:13:07.731 CC lib/vmd/led.o 00:13:07.731 CC lib/rdma_utils/rdma_utils.o 00:13:07.731 CC lib/conf/conf.o 00:13:07.731 CC lib/env_dpdk/env.o 00:13:07.731 CC lib/idxd/idxd.o 00:13:07.731 CC lib/env_dpdk/memory.o 00:13:07.731 CC lib/env_dpdk/pci.o 00:13:07.991 LIB libspdk_conf.a 00:13:07.991 CC lib/idxd/idxd_user.o 
00:13:07.991 CC lib/idxd/idxd_kernel.o 00:13:07.991 SO libspdk_conf.so.6.0 00:13:07.991 LIB libspdk_rdma_utils.a 00:13:07.991 SO libspdk_rdma_utils.so.1.0 00:13:07.991 SYMLINK libspdk_conf.so 00:13:07.991 LIB libspdk_json.a 00:13:07.991 CC lib/env_dpdk/init.o 00:13:07.991 SYMLINK libspdk_rdma_utils.so 00:13:07.991 CC lib/env_dpdk/threads.o 00:13:07.991 SO libspdk_json.so.6.0 00:13:07.991 CC lib/env_dpdk/pci_ioat.o 00:13:07.991 SYMLINK libspdk_json.so 00:13:08.251 CC lib/env_dpdk/pci_virtio.o 00:13:08.251 CC lib/env_dpdk/pci_vmd.o 00:13:08.251 CC lib/env_dpdk/pci_idxd.o 00:13:08.251 CC lib/rdma_provider/common.o 00:13:08.251 CC lib/rdma_provider/rdma_provider_verbs.o 00:13:08.251 CC lib/jsonrpc/jsonrpc_server.o 00:13:08.251 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:13:08.251 LIB libspdk_vmd.a 00:13:08.251 CC lib/env_dpdk/pci_event.o 00:13:08.251 CC lib/env_dpdk/sigbus_handler.o 00:13:08.251 SO libspdk_vmd.so.6.0 00:13:08.251 LIB libspdk_idxd.a 00:13:08.513 CC lib/jsonrpc/jsonrpc_client.o 00:13:08.513 SYMLINK libspdk_vmd.so 00:13:08.513 CC lib/env_dpdk/pci_dpdk.o 00:13:08.513 CC lib/env_dpdk/pci_dpdk_2207.o 00:13:08.513 SO libspdk_idxd.so.12.1 00:13:08.513 LIB libspdk_rdma_provider.a 00:13:08.513 SO libspdk_rdma_provider.so.7.0 00:13:08.513 SYMLINK libspdk_idxd.so 00:13:08.513 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:13:08.513 CC lib/env_dpdk/pci_dpdk_2211.o 00:13:08.513 SYMLINK libspdk_rdma_provider.so 00:13:08.776 LIB libspdk_jsonrpc.a 00:13:08.776 SO libspdk_jsonrpc.so.6.0 00:13:08.776 SYMLINK libspdk_jsonrpc.so 00:13:09.039 CC lib/rpc/rpc.o 00:13:09.300 LIB libspdk_env_dpdk.a 00:13:09.300 LIB libspdk_rpc.a 00:13:09.300 SO libspdk_env_dpdk.so.15.1 00:13:09.300 SO libspdk_rpc.so.6.0 00:13:09.300 SYMLINK libspdk_rpc.so 00:13:09.300 SYMLINK libspdk_env_dpdk.so 00:13:09.563 CC lib/keyring/keyring.o 00:13:09.563 CC lib/notify/notify_rpc.o 00:13:09.563 CC lib/notify/notify.o 00:13:09.563 CC lib/keyring/keyring_rpc.o 00:13:09.563 CC lib/trace/trace_flags.o 00:13:09.563 CC 
lib/trace/trace.o 00:13:09.563 CC lib/trace/trace_rpc.o 00:13:09.563 LIB libspdk_notify.a 00:13:09.825 SO libspdk_notify.so.6.0 00:13:09.825 LIB libspdk_keyring.a 00:13:09.825 SYMLINK libspdk_notify.so 00:13:09.825 SO libspdk_keyring.so.2.0 00:13:09.825 LIB libspdk_trace.a 00:13:09.825 SO libspdk_trace.so.11.0 00:13:09.825 SYMLINK libspdk_keyring.so 00:13:09.825 SYMLINK libspdk_trace.so 00:13:10.087 CC lib/thread/iobuf.o 00:13:10.087 CC lib/thread/thread.o 00:13:10.087 CC lib/sock/sock.o 00:13:10.087 CC lib/sock/sock_rpc.o 00:13:10.660 LIB libspdk_sock.a 00:13:10.660 SO libspdk_sock.so.10.0 00:13:10.660 SYMLINK libspdk_sock.so 00:13:10.958 CC lib/nvme/nvme_ctrlr_cmd.o 00:13:10.958 CC lib/nvme/nvme_ctrlr.o 00:13:10.958 CC lib/nvme/nvme_fabric.o 00:13:10.958 CC lib/nvme/nvme_ns_cmd.o 00:13:10.958 CC lib/nvme/nvme_ns.o 00:13:10.958 CC lib/nvme/nvme_pcie.o 00:13:10.958 CC lib/nvme/nvme_pcie_common.o 00:13:10.958 CC lib/nvme/nvme.o 00:13:10.958 CC lib/nvme/nvme_qpair.o 00:13:11.221 CC lib/nvme/nvme_quirks.o 00:13:11.482 CC lib/nvme/nvme_transport.o 00:13:11.482 CC lib/nvme/nvme_discovery.o 00:13:11.482 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:13:11.482 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:13:11.751 CC lib/nvme/nvme_tcp.o 00:13:11.751 LIB libspdk_thread.a 00:13:11.751 CC lib/nvme/nvme_opal.o 00:13:11.751 CC lib/nvme/nvme_io_msg.o 00:13:11.751 SO libspdk_thread.so.11.0 00:13:11.751 SYMLINK libspdk_thread.so 00:13:11.751 CC lib/nvme/nvme_poll_group.o 00:13:11.751 CC lib/nvme/nvme_zns.o 00:13:12.014 CC lib/nvme/nvme_stubs.o 00:13:12.014 CC lib/nvme/nvme_auth.o 00:13:12.014 CC lib/nvme/nvme_cuse.o 00:13:12.014 CC lib/nvme/nvme_rdma.o 00:13:12.276 CC lib/blob/blobstore.o 00:13:12.276 CC lib/blob/request.o 00:13:12.276 CC lib/accel/accel.o 00:13:12.541 CC lib/accel/accel_rpc.o 00:13:12.541 CC lib/init/json_config.o 00:13:12.541 CC lib/init/subsystem.o 00:13:12.541 CC lib/init/subsystem_rpc.o 00:13:12.806 CC lib/init/rpc.o 00:13:12.806 LIB libspdk_init.a 00:13:12.807 CC 
lib/virtio/virtio.o 00:13:12.807 SO libspdk_init.so.6.0 00:13:12.807 CC lib/virtio/virtio_vhost_user.o 00:13:12.807 CC lib/fsdev/fsdev.o 00:13:12.807 CC lib/fsdev/fsdev_io.o 00:13:12.807 SYMLINK libspdk_init.so 00:13:12.807 CC lib/fsdev/fsdev_rpc.o 00:13:13.069 CC lib/virtio/virtio_vfio_user.o 00:13:13.069 CC lib/virtio/virtio_pci.o 00:13:13.069 CC lib/accel/accel_sw.o 00:13:13.069 CC lib/blob/zeroes.o 00:13:13.332 CC lib/blob/blob_bs_dev.o 00:13:13.332 CC lib/event/app.o 00:13:13.332 CC lib/event/reactor.o 00:13:13.332 CC lib/event/log_rpc.o 00:13:13.332 LIB libspdk_virtio.a 00:13:13.332 LIB libspdk_fsdev.a 00:13:13.332 SO libspdk_virtio.so.7.0 00:13:13.332 SO libspdk_fsdev.so.2.0 00:13:13.332 CC lib/event/app_rpc.o 00:13:13.662 SYMLINK libspdk_virtio.so 00:13:13.662 CC lib/event/scheduler_static.o 00:13:13.662 SYMLINK libspdk_fsdev.so 00:13:13.662 LIB libspdk_accel.a 00:13:13.662 LIB libspdk_nvme.a 00:13:13.662 SO libspdk_accel.so.16.0 00:13:13.662 SYMLINK libspdk_accel.so 00:13:13.662 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:13:13.662 SO libspdk_nvme.so.15.0 00:13:13.662 LIB libspdk_event.a 00:13:13.662 SO libspdk_event.so.14.0 00:13:13.923 SYMLINK libspdk_event.so 00:13:13.923 CC lib/bdev/bdev.o 00:13:13.923 CC lib/bdev/bdev_rpc.o 00:13:13.923 CC lib/bdev/bdev_zone.o 00:13:13.923 CC lib/bdev/scsi_nvme.o 00:13:13.923 CC lib/bdev/part.o 00:13:13.923 SYMLINK libspdk_nvme.so 00:13:14.495 LIB libspdk_fuse_dispatcher.a 00:13:14.495 SO libspdk_fuse_dispatcher.so.1.0 00:13:14.495 SYMLINK libspdk_fuse_dispatcher.so 00:13:15.881 LIB libspdk_blob.a 00:13:15.881 SO libspdk_blob.so.11.0 00:13:15.881 SYMLINK libspdk_blob.so 00:13:16.142 CC lib/blobfs/blobfs.o 00:13:16.142 CC lib/blobfs/tree.o 00:13:16.142 CC lib/lvol/lvol.o 00:13:16.712 LIB libspdk_bdev.a 00:13:16.712 SO libspdk_bdev.so.17.0 00:13:16.973 SYMLINK libspdk_bdev.so 00:13:16.973 LIB libspdk_blobfs.a 00:13:16.973 CC lib/ublk/ublk.o 00:13:16.973 CC lib/ublk/ublk_rpc.o 00:13:16.973 CC lib/scsi/lun.o 00:13:16.973 
CC lib/scsi/dev.o 00:13:16.973 CC lib/nbd/nbd.o 00:13:16.973 CC lib/ftl/ftl_core.o 00:13:16.973 CC lib/scsi/port.o 00:13:16.973 CC lib/nvmf/ctrlr.o 00:13:16.973 SO libspdk_blobfs.so.10.0 00:13:17.232 SYMLINK libspdk_blobfs.so 00:13:17.232 CC lib/ftl/ftl_init.o 00:13:17.232 LIB libspdk_lvol.a 00:13:17.232 SO libspdk_lvol.so.10.0 00:13:17.232 CC lib/ftl/ftl_layout.o 00:13:17.232 CC lib/ftl/ftl_debug.o 00:13:17.232 SYMLINK libspdk_lvol.so 00:13:17.232 CC lib/ftl/ftl_io.o 00:13:17.232 CC lib/nvmf/ctrlr_discovery.o 00:13:17.232 CC lib/ftl/ftl_sb.o 00:13:17.232 CC lib/scsi/scsi.o 00:13:17.494 CC lib/nbd/nbd_rpc.o 00:13:17.494 CC lib/ftl/ftl_l2p.o 00:13:17.494 CC lib/scsi/scsi_bdev.o 00:13:17.494 CC lib/ftl/ftl_l2p_flat.o 00:13:17.494 CC lib/ftl/ftl_nv_cache.o 00:13:17.494 CC lib/ftl/ftl_band.o 00:13:17.494 CC lib/ftl/ftl_band_ops.o 00:13:17.494 LIB libspdk_nbd.a 00:13:17.494 SO libspdk_nbd.so.7.0 00:13:17.754 CC lib/ftl/ftl_writer.o 00:13:17.754 CC lib/ftl/ftl_rq.o 00:13:17.754 SYMLINK libspdk_nbd.so 00:13:17.754 CC lib/ftl/ftl_reloc.o 00:13:17.754 LIB libspdk_ublk.a 00:13:17.754 CC lib/scsi/scsi_pr.o 00:13:17.754 SO libspdk_ublk.so.3.0 00:13:17.754 CC lib/ftl/ftl_l2p_cache.o 00:13:17.754 SYMLINK libspdk_ublk.so 00:13:17.754 CC lib/ftl/ftl_p2l.o 00:13:17.754 CC lib/ftl/ftl_p2l_log.o 00:13:18.014 CC lib/scsi/scsi_rpc.o 00:13:18.014 CC lib/ftl/mngt/ftl_mngt.o 00:13:18.014 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:13:18.014 CC lib/nvmf/ctrlr_bdev.o 00:13:18.014 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:13:18.014 CC lib/scsi/task.o 00:13:18.273 CC lib/ftl/mngt/ftl_mngt_startup.o 00:13:18.273 CC lib/nvmf/subsystem.o 00:13:18.273 CC lib/nvmf/nvmf.o 00:13:18.273 CC lib/ftl/mngt/ftl_mngt_md.o 00:13:18.273 CC lib/ftl/mngt/ftl_mngt_misc.o 00:13:18.273 LIB libspdk_scsi.a 00:13:18.273 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:13:18.273 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:13:18.273 SO libspdk_scsi.so.9.0 00:13:18.534 SYMLINK libspdk_scsi.so 00:13:18.534 CC lib/ftl/mngt/ftl_mngt_band.o 00:13:18.534 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:13:18.534 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:13:18.534 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:13:18.534 CC lib/iscsi/conn.o 00:13:18.534 CC lib/vhost/vhost.o 00:13:18.794 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:13:18.794 CC lib/nvmf/nvmf_rpc.o 00:13:18.794 CC lib/nvmf/transport.o 00:13:18.794 CC lib/ftl/utils/ftl_conf.o 00:13:18.794 CC lib/ftl/utils/ftl_md.o 00:13:19.055 CC lib/ftl/utils/ftl_mempool.o 00:13:19.055 CC lib/iscsi/init_grp.o 00:13:19.055 CC lib/ftl/utils/ftl_bitmap.o 00:13:19.055 CC lib/ftl/utils/ftl_property.o 00:13:19.055 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:13:19.315 CC lib/iscsi/iscsi.o 00:13:19.315 CC lib/iscsi/param.o 00:13:19.315 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:13:19.315 CC lib/iscsi/portal_grp.o 00:13:19.315 CC lib/iscsi/tgt_node.o 00:13:19.315 CC lib/iscsi/iscsi_subsystem.o 00:13:19.575 CC lib/nvmf/tcp.o 00:13:19.575 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:13:19.575 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:13:19.575 CC lib/vhost/vhost_rpc.o 00:13:19.575 CC lib/vhost/vhost_scsi.o 00:13:19.575 CC lib/vhost/vhost_blk.o 00:13:19.575 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:13:19.835 CC lib/vhost/rte_vhost_user.o 00:13:19.835 CC lib/nvmf/stubs.o 00:13:19.835 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:13:19.835 CC lib/iscsi/iscsi_rpc.o 00:13:20.096 CC lib/nvmf/mdns_server.o 00:13:20.096 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:13:20.096 CC lib/ftl/upgrade/ftl_sb_v3.o 00:13:20.096 CC lib/nvmf/rdma.o 00:13:20.359 CC lib/nvmf/auth.o 00:13:20.359 CC lib/ftl/upgrade/ftl_sb_v5.o 00:13:20.359 CC lib/ftl/nvc/ftl_nvc_dev.o 00:13:20.359 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:13:20.621 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:13:20.621 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:13:20.621 CC lib/iscsi/task.o 00:13:20.621 CC lib/ftl/base/ftl_base_dev.o 00:13:20.621 CC lib/ftl/base/ftl_base_bdev.o 00:13:20.621 CC lib/ftl/ftl_trace.o 00:13:20.883 LIB libspdk_iscsi.a 00:13:20.883 LIB libspdk_vhost.a 00:13:20.883 SO 
libspdk_vhost.so.8.0 00:13:20.883 SO libspdk_iscsi.so.8.0 00:13:20.883 LIB libspdk_ftl.a 00:13:20.883 SYMLINK libspdk_vhost.so 00:13:21.145 SYMLINK libspdk_iscsi.so 00:13:21.145 SO libspdk_ftl.so.9.0 00:13:21.407 SYMLINK libspdk_ftl.so 00:13:22.372 LIB libspdk_nvmf.a 00:13:22.635 SO libspdk_nvmf.so.20.0 00:13:22.895 SYMLINK libspdk_nvmf.so 00:13:23.196 CC module/env_dpdk/env_dpdk_rpc.o 00:13:23.196 CC module/fsdev/aio/fsdev_aio.o 00:13:23.196 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:13:23.196 CC module/keyring/file/keyring.o 00:13:23.196 CC module/sock/posix/posix.o 00:13:23.196 CC module/keyring/linux/keyring.o 00:13:23.196 CC module/blob/bdev/blob_bdev.o 00:13:23.196 CC module/scheduler/gscheduler/gscheduler.o 00:13:23.196 CC module/scheduler/dynamic/scheduler_dynamic.o 00:13:23.196 CC module/accel/error/accel_error.o 00:13:23.196 LIB libspdk_env_dpdk_rpc.a 00:13:23.196 SO libspdk_env_dpdk_rpc.so.6.0 00:13:23.196 SYMLINK libspdk_env_dpdk_rpc.so 00:13:23.196 CC module/keyring/linux/keyring_rpc.o 00:13:23.196 CC module/accel/error/accel_error_rpc.o 00:13:23.196 CC module/keyring/file/keyring_rpc.o 00:13:23.196 LIB libspdk_scheduler_dpdk_governor.a 00:13:23.457 SO libspdk_scheduler_dpdk_governor.so.4.0 00:13:23.457 LIB libspdk_scheduler_gscheduler.a 00:13:23.457 SO libspdk_scheduler_gscheduler.so.4.0 00:13:23.457 LIB libspdk_scheduler_dynamic.a 00:13:23.457 SO libspdk_scheduler_dynamic.so.4.0 00:13:23.457 SYMLINK libspdk_scheduler_dpdk_governor.so 00:13:23.457 CC module/fsdev/aio/fsdev_aio_rpc.o 00:13:23.457 LIB libspdk_keyring_linux.a 00:13:23.457 SYMLINK libspdk_scheduler_gscheduler.so 00:13:23.457 SYMLINK libspdk_scheduler_dynamic.so 00:13:23.457 LIB libspdk_accel_error.a 00:13:23.457 LIB libspdk_keyring_file.a 00:13:23.457 SO libspdk_keyring_linux.so.1.0 00:13:23.457 LIB libspdk_blob_bdev.a 00:13:23.457 SO libspdk_keyring_file.so.2.0 00:13:23.457 SO libspdk_accel_error.so.2.0 00:13:23.457 SO libspdk_blob_bdev.so.11.0 00:13:23.457 SYMLINK 
libspdk_keyring_linux.so 00:13:23.457 CC module/fsdev/aio/linux_aio_mgr.o 00:13:23.457 SYMLINK libspdk_accel_error.so 00:13:23.457 SYMLINK libspdk_keyring_file.so 00:13:23.457 SYMLINK libspdk_blob_bdev.so 00:13:23.457 CC module/accel/ioat/accel_ioat.o 00:13:23.457 CC module/accel/ioat/accel_ioat_rpc.o 00:13:23.457 CC module/accel/dsa/accel_dsa.o 00:13:23.457 CC module/accel/dsa/accel_dsa_rpc.o 00:13:23.457 CC module/accel/iaa/accel_iaa.o 00:13:23.719 CC module/accel/iaa/accel_iaa_rpc.o 00:13:23.719 LIB libspdk_accel_ioat.a 00:13:23.719 CC module/blobfs/bdev/blobfs_bdev.o 00:13:23.719 CC module/bdev/delay/vbdev_delay.o 00:13:23.719 SO libspdk_accel_ioat.so.6.0 00:13:23.719 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:13:23.719 SYMLINK libspdk_accel_ioat.so 00:13:23.719 CC module/bdev/error/vbdev_error.o 00:13:23.719 CC module/bdev/gpt/gpt.o 00:13:23.719 LIB libspdk_accel_iaa.a 00:13:23.719 LIB libspdk_accel_dsa.a 00:13:23.978 SO libspdk_accel_iaa.so.3.0 00:13:23.978 SO libspdk_accel_dsa.so.5.0 00:13:23.978 LIB libspdk_fsdev_aio.a 00:13:23.978 CC module/bdev/gpt/vbdev_gpt.o 00:13:23.978 SO libspdk_fsdev_aio.so.1.0 00:13:23.978 LIB libspdk_sock_posix.a 00:13:23.978 SYMLINK libspdk_accel_dsa.so 00:13:23.978 SYMLINK libspdk_accel_iaa.so 00:13:23.978 CC module/bdev/error/vbdev_error_rpc.o 00:13:23.978 CC module/bdev/delay/vbdev_delay_rpc.o 00:13:23.978 LIB libspdk_blobfs_bdev.a 00:13:23.978 SO libspdk_sock_posix.so.6.0 00:13:23.978 CC module/bdev/lvol/vbdev_lvol.o 00:13:23.978 SO libspdk_blobfs_bdev.so.6.0 00:13:23.978 SYMLINK libspdk_fsdev_aio.so 00:13:23.978 SYMLINK libspdk_blobfs_bdev.so 00:13:23.978 SYMLINK libspdk_sock_posix.so 00:13:23.978 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:13:23.978 LIB libspdk_bdev_error.a 00:13:24.240 SO libspdk_bdev_error.so.6.0 00:13:24.240 LIB libspdk_bdev_delay.a 00:13:24.240 CC module/bdev/malloc/bdev_malloc.o 00:13:24.240 SO libspdk_bdev_delay.so.6.0 00:13:24.240 LIB libspdk_bdev_gpt.a 00:13:24.240 CC module/bdev/null/bdev_null.o 
00:13:24.240 CC module/bdev/nvme/bdev_nvme.o 00:13:24.240 SO libspdk_bdev_gpt.so.6.0 00:13:24.240 SYMLINK libspdk_bdev_error.so 00:13:24.240 CC module/bdev/nvme/bdev_nvme_rpc.o 00:13:24.240 CC module/bdev/passthru/vbdev_passthru.o 00:13:24.240 SYMLINK libspdk_bdev_delay.so 00:13:24.240 CC module/bdev/raid/bdev_raid.o 00:13:24.240 CC module/bdev/raid/bdev_raid_rpc.o 00:13:24.240 SYMLINK libspdk_bdev_gpt.so 00:13:24.240 CC module/bdev/raid/bdev_raid_sb.o 00:13:24.501 CC module/bdev/null/bdev_null_rpc.o 00:13:24.501 LIB libspdk_bdev_lvol.a 00:13:24.501 SO libspdk_bdev_lvol.so.6.0 00:13:24.501 CC module/bdev/malloc/bdev_malloc_rpc.o 00:13:24.501 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:13:24.501 CC module/bdev/split/vbdev_split.o 00:13:24.501 LIB libspdk_bdev_null.a 00:13:24.501 SYMLINK libspdk_bdev_lvol.so 00:13:24.501 SO libspdk_bdev_null.so.6.0 00:13:24.762 CC module/bdev/zone_block/vbdev_zone_block.o 00:13:24.762 CC module/bdev/aio/bdev_aio.o 00:13:24.762 SYMLINK libspdk_bdev_null.so 00:13:24.763 LIB libspdk_bdev_passthru.a 00:13:24.763 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:13:24.763 LIB libspdk_bdev_malloc.a 00:13:24.763 SO libspdk_bdev_malloc.so.6.0 00:13:24.763 SO libspdk_bdev_passthru.so.6.0 00:13:24.763 CC module/bdev/ftl/bdev_ftl.o 00:13:24.763 SYMLINK libspdk_bdev_malloc.so 00:13:24.763 SYMLINK libspdk_bdev_passthru.so 00:13:24.763 CC module/bdev/split/vbdev_split_rpc.o 00:13:24.763 CC module/bdev/nvme/nvme_rpc.o 00:13:24.763 CC module/bdev/aio/bdev_aio_rpc.o 00:13:24.763 CC module/bdev/nvme/bdev_mdns_client.o 00:13:24.763 CC module/bdev/raid/raid0.o 00:13:25.025 LIB libspdk_bdev_split.a 00:13:25.025 CC module/bdev/raid/raid1.o 00:13:25.025 CC module/bdev/raid/concat.o 00:13:25.025 LIB libspdk_bdev_zone_block.a 00:13:25.025 SO libspdk_bdev_split.so.6.0 00:13:25.025 SO libspdk_bdev_zone_block.so.6.0 00:13:25.025 LIB libspdk_bdev_aio.a 00:13:25.025 CC module/bdev/ftl/bdev_ftl_rpc.o 00:13:25.025 SYMLINK libspdk_bdev_split.so 00:13:25.025 SO 
libspdk_bdev_aio.so.6.0 00:13:25.025 SYMLINK libspdk_bdev_zone_block.so 00:13:25.025 CC module/bdev/raid/raid5f.o 00:13:25.025 CC module/bdev/nvme/vbdev_opal.o 00:13:25.025 SYMLINK libspdk_bdev_aio.so 00:13:25.025 CC module/bdev/nvme/vbdev_opal_rpc.o 00:13:25.025 CC module/bdev/iscsi/bdev_iscsi.o 00:13:25.286 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:13:25.286 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:13:25.286 CC module/bdev/virtio/bdev_virtio_scsi.o 00:13:25.286 LIB libspdk_bdev_ftl.a 00:13:25.286 SO libspdk_bdev_ftl.so.6.0 00:13:25.286 CC module/bdev/virtio/bdev_virtio_blk.o 00:13:25.286 CC module/bdev/virtio/bdev_virtio_rpc.o 00:13:25.286 SYMLINK libspdk_bdev_ftl.so 00:13:25.547 LIB libspdk_bdev_iscsi.a 00:13:25.547 SO libspdk_bdev_iscsi.so.6.0 00:13:25.547 LIB libspdk_bdev_raid.a 00:13:25.547 SYMLINK libspdk_bdev_iscsi.so 00:13:25.547 SO libspdk_bdev_raid.so.6.0 00:13:25.810 SYMLINK libspdk_bdev_raid.so 00:13:25.810 LIB libspdk_bdev_virtio.a 00:13:25.810 SO libspdk_bdev_virtio.so.6.0 00:13:25.810 SYMLINK libspdk_bdev_virtio.so 00:13:26.755 LIB libspdk_bdev_nvme.a 00:13:26.755 SO libspdk_bdev_nvme.so.7.1 00:13:27.015 SYMLINK libspdk_bdev_nvme.so 00:13:27.275 CC module/event/subsystems/sock/sock.o 00:13:27.275 CC module/event/subsystems/fsdev/fsdev.o 00:13:27.275 CC module/event/subsystems/iobuf/iobuf.o 00:13:27.275 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:13:27.275 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:13:27.275 CC module/event/subsystems/vmd/vmd.o 00:13:27.275 CC module/event/subsystems/vmd/vmd_rpc.o 00:13:27.275 CC module/event/subsystems/keyring/keyring.o 00:13:27.275 CC module/event/subsystems/scheduler/scheduler.o 00:13:27.540 LIB libspdk_event_keyring.a 00:13:27.540 LIB libspdk_event_fsdev.a 00:13:27.540 LIB libspdk_event_vhost_blk.a 00:13:27.540 LIB libspdk_event_scheduler.a 00:13:27.540 LIB libspdk_event_sock.a 00:13:27.540 LIB libspdk_event_vmd.a 00:13:27.540 SO libspdk_event_keyring.so.1.0 00:13:27.540 SO 
libspdk_event_fsdev.so.1.0 00:13:27.540 SO libspdk_event_vhost_blk.so.3.0 00:13:27.540 SO libspdk_event_sock.so.5.0 00:13:27.540 SO libspdk_event_scheduler.so.4.0 00:13:27.540 LIB libspdk_event_iobuf.a 00:13:27.540 SO libspdk_event_vmd.so.6.0 00:13:27.540 SYMLINK libspdk_event_keyring.so 00:13:27.540 SO libspdk_event_iobuf.so.3.0 00:13:27.540 SYMLINK libspdk_event_fsdev.so 00:13:27.540 SYMLINK libspdk_event_vhost_blk.so 00:13:27.540 SYMLINK libspdk_event_sock.so 00:13:27.540 SYMLINK libspdk_event_scheduler.so 00:13:27.540 SYMLINK libspdk_event_vmd.so 00:13:27.540 SYMLINK libspdk_event_iobuf.so 00:13:27.801 CC module/event/subsystems/accel/accel.o 00:13:28.061 LIB libspdk_event_accel.a 00:13:28.061 SO libspdk_event_accel.so.6.0 00:13:28.061 SYMLINK libspdk_event_accel.so 00:13:28.321 CC module/event/subsystems/bdev/bdev.o 00:13:28.321 LIB libspdk_event_bdev.a 00:13:28.321 SO libspdk_event_bdev.so.6.0 00:13:28.582 SYMLINK libspdk_event_bdev.so 00:13:28.582 CC module/event/subsystems/scsi/scsi.o 00:13:28.582 CC module/event/subsystems/ublk/ublk.o 00:13:28.582 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:13:28.582 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:13:28.582 CC module/event/subsystems/nbd/nbd.o 00:13:28.844 LIB libspdk_event_ublk.a 00:13:28.844 LIB libspdk_event_nbd.a 00:13:28.844 LIB libspdk_event_scsi.a 00:13:28.844 SO libspdk_event_ublk.so.3.0 00:13:28.844 SO libspdk_event_nbd.so.6.0 00:13:28.844 SO libspdk_event_scsi.so.6.0 00:13:28.844 SYMLINK libspdk_event_ublk.so 00:13:28.844 SYMLINK libspdk_event_nbd.so 00:13:28.844 LIB libspdk_event_nvmf.a 00:13:28.844 SYMLINK libspdk_event_scsi.so 00:13:28.844 SO libspdk_event_nvmf.so.6.0 00:13:28.844 SYMLINK libspdk_event_nvmf.so 00:13:29.105 CC module/event/subsystems/iscsi/iscsi.o 00:13:29.105 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:13:29.105 LIB libspdk_event_iscsi.a 00:13:29.105 LIB libspdk_event_vhost_scsi.a 00:13:29.365 SO libspdk_event_iscsi.so.6.0 00:13:29.365 SO 
libspdk_event_vhost_scsi.so.3.0 00:13:29.365 SYMLINK libspdk_event_iscsi.so 00:13:29.365 SYMLINK libspdk_event_vhost_scsi.so 00:13:29.365 SO libspdk.so.6.0 00:13:29.365 SYMLINK libspdk.so 00:13:29.626 TEST_HEADER include/spdk/accel.h 00:13:29.626 TEST_HEADER include/spdk/accel_module.h 00:13:29.626 TEST_HEADER include/spdk/assert.h 00:13:29.626 CXX app/trace/trace.o 00:13:29.626 TEST_HEADER include/spdk/barrier.h 00:13:29.626 CC app/trace_record/trace_record.o 00:13:29.626 TEST_HEADER include/spdk/base64.h 00:13:29.626 TEST_HEADER include/spdk/bdev.h 00:13:29.626 TEST_HEADER include/spdk/bdev_module.h 00:13:29.626 TEST_HEADER include/spdk/bdev_zone.h 00:13:29.626 TEST_HEADER include/spdk/bit_array.h 00:13:29.626 TEST_HEADER include/spdk/bit_pool.h 00:13:29.626 TEST_HEADER include/spdk/blob_bdev.h 00:13:29.626 TEST_HEADER include/spdk/blobfs_bdev.h 00:13:29.626 TEST_HEADER include/spdk/blobfs.h 00:13:29.626 TEST_HEADER include/spdk/blob.h 00:13:29.626 TEST_HEADER include/spdk/conf.h 00:13:29.626 TEST_HEADER include/spdk/config.h 00:13:29.626 TEST_HEADER include/spdk/cpuset.h 00:13:29.626 CC examples/interrupt_tgt/interrupt_tgt.o 00:13:29.626 TEST_HEADER include/spdk/crc16.h 00:13:29.626 TEST_HEADER include/spdk/crc32.h 00:13:29.626 TEST_HEADER include/spdk/crc64.h 00:13:29.626 TEST_HEADER include/spdk/dif.h 00:13:29.626 TEST_HEADER include/spdk/dma.h 00:13:29.626 TEST_HEADER include/spdk/endian.h 00:13:29.626 TEST_HEADER include/spdk/env_dpdk.h 00:13:29.626 TEST_HEADER include/spdk/env.h 00:13:29.626 TEST_HEADER include/spdk/event.h 00:13:29.626 TEST_HEADER include/spdk/fd_group.h 00:13:29.626 TEST_HEADER include/spdk/fd.h 00:13:29.626 TEST_HEADER include/spdk/file.h 00:13:29.626 TEST_HEADER include/spdk/fsdev.h 00:13:29.626 TEST_HEADER include/spdk/fsdev_module.h 00:13:29.626 TEST_HEADER include/spdk/ftl.h 00:13:29.626 TEST_HEADER include/spdk/fuse_dispatcher.h 00:13:29.626 TEST_HEADER include/spdk/gpt_spec.h 00:13:29.626 TEST_HEADER include/spdk/hexlify.h 
00:13:29.626 TEST_HEADER include/spdk/histogram_data.h 00:13:29.626 TEST_HEADER include/spdk/idxd.h 00:13:29.626 TEST_HEADER include/spdk/idxd_spec.h 00:13:29.626 CC examples/util/zipf/zipf.o 00:13:29.626 CC examples/ioat/perf/perf.o 00:13:29.626 TEST_HEADER include/spdk/init.h 00:13:29.626 TEST_HEADER include/spdk/ioat.h 00:13:29.626 TEST_HEADER include/spdk/ioat_spec.h 00:13:29.626 CC test/thread/poller_perf/poller_perf.o 00:13:29.626 TEST_HEADER include/spdk/iscsi_spec.h 00:13:29.626 TEST_HEADER include/spdk/json.h 00:13:29.626 TEST_HEADER include/spdk/jsonrpc.h 00:13:29.626 TEST_HEADER include/spdk/keyring.h 00:13:29.626 TEST_HEADER include/spdk/keyring_module.h 00:13:29.626 TEST_HEADER include/spdk/likely.h 00:13:29.626 TEST_HEADER include/spdk/log.h 00:13:29.626 TEST_HEADER include/spdk/lvol.h 00:13:29.626 TEST_HEADER include/spdk/md5.h 00:13:29.626 TEST_HEADER include/spdk/memory.h 00:13:29.626 TEST_HEADER include/spdk/mmio.h 00:13:29.626 CC test/dma/test_dma/test_dma.o 00:13:29.626 TEST_HEADER include/spdk/nbd.h 00:13:29.626 TEST_HEADER include/spdk/net.h 00:13:29.626 TEST_HEADER include/spdk/notify.h 00:13:29.626 TEST_HEADER include/spdk/nvme.h 00:13:29.626 TEST_HEADER include/spdk/nvme_intel.h 00:13:29.626 TEST_HEADER include/spdk/nvme_ocssd.h 00:13:29.626 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:13:29.626 TEST_HEADER include/spdk/nvme_spec.h 00:13:29.626 TEST_HEADER include/spdk/nvme_zns.h 00:13:29.626 TEST_HEADER include/spdk/nvmf_cmd.h 00:13:29.626 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:13:29.626 TEST_HEADER include/spdk/nvmf.h 00:13:29.626 TEST_HEADER include/spdk/nvmf_spec.h 00:13:29.888 TEST_HEADER include/spdk/nvmf_transport.h 00:13:29.888 TEST_HEADER include/spdk/opal.h 00:13:29.888 TEST_HEADER include/spdk/opal_spec.h 00:13:29.888 TEST_HEADER include/spdk/pci_ids.h 00:13:29.888 CC test/app/bdev_svc/bdev_svc.o 00:13:29.888 TEST_HEADER include/spdk/pipe.h 00:13:29.888 TEST_HEADER include/spdk/queue.h 00:13:29.888 TEST_HEADER 
include/spdk/reduce.h 00:13:29.888 CC test/env/mem_callbacks/mem_callbacks.o 00:13:29.888 TEST_HEADER include/spdk/rpc.h 00:13:29.888 TEST_HEADER include/spdk/scheduler.h 00:13:29.888 TEST_HEADER include/spdk/scsi.h 00:13:29.888 TEST_HEADER include/spdk/scsi_spec.h 00:13:29.888 TEST_HEADER include/spdk/sock.h 00:13:29.888 TEST_HEADER include/spdk/stdinc.h 00:13:29.888 TEST_HEADER include/spdk/string.h 00:13:29.888 TEST_HEADER include/spdk/thread.h 00:13:29.888 TEST_HEADER include/spdk/trace.h 00:13:29.888 TEST_HEADER include/spdk/trace_parser.h 00:13:29.888 TEST_HEADER include/spdk/tree.h 00:13:29.888 TEST_HEADER include/spdk/ublk.h 00:13:29.888 TEST_HEADER include/spdk/util.h 00:13:29.888 TEST_HEADER include/spdk/uuid.h 00:13:29.888 TEST_HEADER include/spdk/version.h 00:13:29.888 TEST_HEADER include/spdk/vfio_user_pci.h 00:13:29.888 TEST_HEADER include/spdk/vfio_user_spec.h 00:13:29.888 TEST_HEADER include/spdk/vhost.h 00:13:29.888 TEST_HEADER include/spdk/vmd.h 00:13:29.888 TEST_HEADER include/spdk/xor.h 00:13:29.888 TEST_HEADER include/spdk/zipf.h 00:13:29.888 CXX test/cpp_headers/accel.o 00:13:29.888 LINK zipf 00:13:29.888 LINK interrupt_tgt 00:13:29.888 LINK spdk_trace_record 00:13:29.888 LINK ioat_perf 00:13:29.888 LINK poller_perf 00:13:29.888 LINK bdev_svc 00:13:29.888 CXX test/cpp_headers/accel_module.o 00:13:30.149 LINK spdk_trace 00:13:30.149 CC test/env/vtophys/vtophys.o 00:13:30.149 CC examples/ioat/verify/verify.o 00:13:30.149 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:13:30.149 CXX test/cpp_headers/assert.o 00:13:30.149 CC app/nvmf_tgt/nvmf_main.o 00:13:30.149 CXX test/cpp_headers/barrier.o 00:13:30.149 LINK vtophys 00:13:30.149 CC test/env/memory/memory_ut.o 00:13:30.149 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:13:30.149 LINK env_dpdk_post_init 00:13:30.410 LINK mem_callbacks 00:13:30.410 LINK test_dma 00:13:30.410 LINK verify 00:13:30.410 CXX test/cpp_headers/base64.o 00:13:30.410 LINK nvmf_tgt 00:13:30.410 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:13:30.410 CXX test/cpp_headers/bdev.o 00:13:30.410 CC test/rpc_client/rpc_client_test.o 00:13:30.410 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:13:30.410 CXX test/cpp_headers/bdev_module.o 00:13:30.670 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:13:30.670 CC test/env/pci/pci_ut.o 00:13:30.670 CXX test/cpp_headers/bdev_zone.o 00:13:30.670 LINK rpc_client_test 00:13:30.670 LINK nvme_fuzz 00:13:30.670 CC app/iscsi_tgt/iscsi_tgt.o 00:13:30.670 CC examples/thread/thread/thread_ex.o 00:13:30.670 CXX test/cpp_headers/bit_array.o 00:13:30.670 CXX test/cpp_headers/bit_pool.o 00:13:30.930 CC app/spdk_tgt/spdk_tgt.o 00:13:30.930 LINK iscsi_tgt 00:13:30.930 CC examples/sock/hello_world/hello_sock.o 00:13:30.930 LINK thread 00:13:30.930 CXX test/cpp_headers/blob_bdev.o 00:13:30.930 LINK pci_ut 00:13:30.930 LINK spdk_tgt 00:13:30.930 LINK vhost_fuzz 00:13:31.192 CC test/event/event_perf/event_perf.o 00:13:31.192 CC test/app/histogram_perf/histogram_perf.o 00:13:31.192 CXX test/cpp_headers/blobfs_bdev.o 00:13:31.192 LINK hello_sock 00:13:31.192 CC test/nvme/aer/aer.o 00:13:31.192 LINK event_perf 00:13:31.192 LINK histogram_perf 00:13:31.192 CC app/spdk_lspci/spdk_lspci.o 00:13:31.192 CXX test/cpp_headers/blobfs.o 00:13:31.454 CC test/accel/dif/dif.o 00:13:31.454 LINK memory_ut 00:13:31.454 LINK spdk_lspci 00:13:31.454 CC test/blobfs/mkfs/mkfs.o 00:13:31.454 CC test/event/reactor/reactor.o 00:13:31.454 CXX test/cpp_headers/blob.o 00:13:31.454 CC examples/vmd/lsvmd/lsvmd.o 00:13:31.454 CC examples/vmd/led/led.o 00:13:31.454 LINK aer 00:13:31.454 LINK lsvmd 00:13:31.454 LINK reactor 00:13:31.754 CXX test/cpp_headers/conf.o 00:13:31.754 LINK led 00:13:31.754 LINK mkfs 00:13:31.754 CC app/spdk_nvme_perf/perf.o 00:13:31.754 CC examples/idxd/perf/perf.o 00:13:31.754 CC test/nvme/reset/reset.o 00:13:31.754 CXX test/cpp_headers/config.o 00:13:31.754 CXX test/cpp_headers/cpuset.o 00:13:31.754 CC test/event/reactor_perf/reactor_perf.o 
00:13:31.754 CC test/nvme/sgl/sgl.o 00:13:31.754 CC test/nvme/e2edp/nvme_dp.o 00:13:31.754 CC test/nvme/overhead/overhead.o 00:13:32.015 CXX test/cpp_headers/crc16.o 00:13:32.015 LINK reactor_perf 00:13:32.015 LINK reset 00:13:32.015 LINK idxd_perf 00:13:32.015 CXX test/cpp_headers/crc32.o 00:13:32.015 LINK dif 00:13:32.015 LINK sgl 00:13:32.015 LINK iscsi_fuzz 00:13:32.015 LINK nvme_dp 00:13:32.016 CC test/event/app_repeat/app_repeat.o 00:13:32.276 LINK overhead 00:13:32.276 CXX test/cpp_headers/crc64.o 00:13:32.276 LINK app_repeat 00:13:32.276 CC test/nvme/err_injection/err_injection.o 00:13:32.276 CXX test/cpp_headers/dif.o 00:13:32.276 CC test/lvol/esnap/esnap.o 00:13:32.276 CC test/app/jsoncat/jsoncat.o 00:13:32.276 CC examples/fsdev/hello_world/hello_fsdev.o 00:13:32.276 CC examples/accel/perf/accel_perf.o 00:13:32.535 CC examples/blob/hello_world/hello_blob.o 00:13:32.535 CXX test/cpp_headers/dma.o 00:13:32.535 LINK spdk_nvme_perf 00:13:32.535 LINK err_injection 00:13:32.535 LINK jsoncat 00:13:32.535 CC test/bdev/bdevio/bdevio.o 00:13:32.535 CC test/event/scheduler/scheduler.o 00:13:32.535 CXX test/cpp_headers/endian.o 00:13:32.535 LINK hello_fsdev 00:13:32.794 LINK hello_blob 00:13:32.794 CC test/app/stub/stub.o 00:13:32.794 CC app/spdk_nvme_identify/identify.o 00:13:32.794 CC test/nvme/startup/startup.o 00:13:32.794 LINK scheduler 00:13:32.794 CXX test/cpp_headers/env_dpdk.o 00:13:32.794 LINK stub 00:13:32.794 LINK startup 00:13:32.794 LINK accel_perf 00:13:32.794 CXX test/cpp_headers/env.o 00:13:32.794 LINK bdevio 00:13:33.055 CC examples/blob/cli/blobcli.o 00:13:33.055 CC examples/nvme/reconnect/reconnect.o 00:13:33.055 CC examples/nvme/hello_world/hello_world.o 00:13:33.055 CXX test/cpp_headers/event.o 00:13:33.055 CC examples/nvme/nvme_manage/nvme_manage.o 00:13:33.055 CC test/nvme/reserve/reserve.o 00:13:33.055 CC test/nvme/simple_copy/simple_copy.o 00:13:33.055 CC test/nvme/connect_stress/connect_stress.o 00:13:33.315 CXX test/cpp_headers/fd_group.o 
00:13:33.315 LINK hello_world 00:13:33.315 LINK reserve 00:13:33.315 LINK connect_stress 00:13:33.315 CXX test/cpp_headers/fd.o 00:13:33.315 LINK simple_copy 00:13:33.315 LINK reconnect 00:13:33.315 CXX test/cpp_headers/file.o 00:13:33.575 LINK blobcli 00:13:33.575 CC test/nvme/boot_partition/boot_partition.o 00:13:33.575 CXX test/cpp_headers/fsdev.o 00:13:33.575 CXX test/cpp_headers/fsdev_module.o 00:13:33.575 CXX test/cpp_headers/ftl.o 00:13:33.575 LINK spdk_nvme_identify 00:13:33.575 LINK nvme_manage 00:13:33.575 CC examples/nvme/arbitration/arbitration.o 00:13:33.575 CC test/nvme/compliance/nvme_compliance.o 00:13:33.575 CC examples/bdev/hello_world/hello_bdev.o 00:13:33.575 LINK boot_partition 00:13:33.834 CXX test/cpp_headers/fuse_dispatcher.o 00:13:33.834 CC test/nvme/fused_ordering/fused_ordering.o 00:13:33.834 CC test/nvme/fdp/fdp.o 00:13:33.834 CC app/spdk_nvme_discover/discovery_aer.o 00:13:33.834 CC test/nvme/doorbell_aers/doorbell_aers.o 00:13:33.834 CXX test/cpp_headers/gpt_spec.o 00:13:33.834 LINK hello_bdev 00:13:33.834 LINK arbitration 00:13:33.834 CC app/spdk_top/spdk_top.o 00:13:34.094 LINK fused_ordering 00:13:34.094 CXX test/cpp_headers/hexlify.o 00:13:34.094 LINK spdk_nvme_discover 00:13:34.094 LINK doorbell_aers 00:13:34.094 LINK nvme_compliance 00:13:34.094 CC examples/nvme/hotplug/hotplug.o 00:13:34.094 CXX test/cpp_headers/histogram_data.o 00:13:34.094 LINK fdp 00:13:34.094 CXX test/cpp_headers/idxd.o 00:13:34.094 CXX test/cpp_headers/idxd_spec.o 00:13:34.094 CC examples/nvme/cmb_copy/cmb_copy.o 00:13:34.094 CC examples/bdev/bdevperf/bdevperf.o 00:13:34.094 CC examples/nvme/abort/abort.o 00:13:34.357 CXX test/cpp_headers/init.o 00:13:34.357 CXX test/cpp_headers/ioat.o 00:13:34.357 CC test/nvme/cuse/cuse.o 00:13:34.357 CXX test/cpp_headers/ioat_spec.o 00:13:34.357 LINK hotplug 00:13:34.357 LINK cmb_copy 00:13:34.357 CXX test/cpp_headers/iscsi_spec.o 00:13:34.619 CC app/vhost/vhost.o 00:13:34.619 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:13:34.619 CC app/spdk_dd/spdk_dd.o 00:13:34.619 LINK abort 00:13:34.619 CXX test/cpp_headers/json.o 00:13:34.619 CC app/fio/nvme/fio_plugin.o 00:13:34.619 LINK vhost 00:13:34.619 LINK pmr_persistence 00:13:34.936 CXX test/cpp_headers/jsonrpc.o 00:13:34.936 CXX test/cpp_headers/keyring.o 00:13:34.936 CXX test/cpp_headers/keyring_module.o 00:13:34.936 CC app/fio/bdev/fio_plugin.o 00:13:34.936 CXX test/cpp_headers/likely.o 00:13:34.936 LINK spdk_top 00:13:34.936 LINK spdk_dd 00:13:34.936 CXX test/cpp_headers/log.o 00:13:34.936 LINK bdevperf 00:13:34.936 CXX test/cpp_headers/lvol.o 00:13:34.936 CXX test/cpp_headers/md5.o 00:13:35.197 CXX test/cpp_headers/memory.o 00:13:35.197 CXX test/cpp_headers/mmio.o 00:13:35.197 CXX test/cpp_headers/nbd.o 00:13:35.197 CXX test/cpp_headers/net.o 00:13:35.197 CXX test/cpp_headers/notify.o 00:13:35.197 LINK spdk_nvme 00:13:35.197 CXX test/cpp_headers/nvme.o 00:13:35.197 CXX test/cpp_headers/nvme_intel.o 00:13:35.197 CXX test/cpp_headers/nvme_ocssd.o 00:13:35.197 CXX test/cpp_headers/nvme_ocssd_spec.o 00:13:35.197 CXX test/cpp_headers/nvme_spec.o 00:13:35.197 CXX test/cpp_headers/nvme_zns.o 00:13:35.458 CC examples/nvmf/nvmf/nvmf.o 00:13:35.458 LINK spdk_bdev 00:13:35.458 CXX test/cpp_headers/nvmf_cmd.o 00:13:35.458 CXX test/cpp_headers/nvmf_fc_spec.o 00:13:35.458 CXX test/cpp_headers/nvmf.o 00:13:35.458 CXX test/cpp_headers/nvmf_spec.o 00:13:35.458 CXX test/cpp_headers/nvmf_transport.o 00:13:35.458 CXX test/cpp_headers/opal.o 00:13:35.458 CXX test/cpp_headers/opal_spec.o 00:13:35.458 LINK cuse 00:13:35.458 CXX test/cpp_headers/pci_ids.o 00:13:35.458 CXX test/cpp_headers/pipe.o 00:13:35.458 CXX test/cpp_headers/queue.o 00:13:35.718 CXX test/cpp_headers/reduce.o 00:13:35.718 CXX test/cpp_headers/rpc.o 00:13:35.718 CXX test/cpp_headers/scheduler.o 00:13:35.718 LINK nvmf 00:13:35.718 CXX test/cpp_headers/scsi.o 00:13:35.718 CXX test/cpp_headers/scsi_spec.o 00:13:35.718 CXX 
test/cpp_headers/sock.o 00:13:35.718 CXX test/cpp_headers/stdinc.o 00:13:35.718 CXX test/cpp_headers/string.o 00:13:35.718 CXX test/cpp_headers/thread.o 00:13:35.718 CXX test/cpp_headers/trace.o 00:13:35.718 CXX test/cpp_headers/trace_parser.o 00:13:35.718 CXX test/cpp_headers/tree.o 00:13:35.718 CXX test/cpp_headers/ublk.o 00:13:35.718 CXX test/cpp_headers/util.o 00:13:35.718 CXX test/cpp_headers/uuid.o 00:13:35.718 CXX test/cpp_headers/version.o 00:13:35.718 CXX test/cpp_headers/vfio_user_pci.o 00:13:35.718 CXX test/cpp_headers/vfio_user_spec.o 00:13:35.982 CXX test/cpp_headers/vhost.o 00:13:35.982 CXX test/cpp_headers/vmd.o 00:13:35.982 CXX test/cpp_headers/xor.o 00:13:35.982 CXX test/cpp_headers/zipf.o 00:13:37.505 LINK esnap 00:13:37.505 00:13:37.505 real 1m17.277s 00:13:37.505 user 6m54.793s 00:13:37.505 sys 1m17.393s 00:13:37.505 05:24:09 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:13:37.505 ************************************ 00:13:37.505 END TEST make 00:13:37.505 ************************************ 00:13:37.506 05:24:09 make -- common/autotest_common.sh@10 -- $ set +x 00:13:37.506 05:24:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:13:37.506 05:24:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:13:37.506 05:24:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:13:37.506 05:24:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:37.506 05:24:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:13:37.506 05:24:09 -- pm/common@44 -- $ pid=5043 00:13:37.506 05:24:09 -- pm/common@50 -- $ kill -TERM 5043 00:13:37.506 05:24:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:37.506 05:24:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:13:37.506 05:24:09 -- pm/common@44 -- $ pid=5044 00:13:37.506 05:24:09 -- pm/common@50 -- $ kill -TERM 5044 00:13:37.506 05:24:09 -- 
spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:13:37.506 05:24:09 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:13:37.773 05:24:09 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:37.773 05:24:09 -- common/autotest_common.sh@1691 -- # lcov --version 00:13:37.773 05:24:09 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:37.773 05:24:09 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:37.773 05:24:09 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:37.773 05:24:09 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:37.773 05:24:09 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:37.773 05:24:09 -- scripts/common.sh@336 -- # IFS=.-: 00:13:37.773 05:24:09 -- scripts/common.sh@336 -- # read -ra ver1 00:13:37.773 05:24:09 -- scripts/common.sh@337 -- # IFS=.-: 00:13:37.773 05:24:09 -- scripts/common.sh@337 -- # read -ra ver2 00:13:37.773 05:24:09 -- scripts/common.sh@338 -- # local 'op=<' 00:13:37.773 05:24:09 -- scripts/common.sh@340 -- # ver1_l=2 00:13:37.773 05:24:09 -- scripts/common.sh@341 -- # ver2_l=1 00:13:37.774 05:24:09 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:37.774 05:24:09 -- scripts/common.sh@344 -- # case "$op" in 00:13:37.774 05:24:09 -- scripts/common.sh@345 -- # : 1 00:13:37.774 05:24:09 -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:37.774 05:24:09 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:37.774 05:24:09 -- scripts/common.sh@365 -- # decimal 1 00:13:37.774 05:24:09 -- scripts/common.sh@353 -- # local d=1 00:13:37.774 05:24:09 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:37.774 05:24:09 -- scripts/common.sh@355 -- # echo 1 00:13:37.774 05:24:09 -- scripts/common.sh@365 -- # ver1[v]=1 00:13:37.774 05:24:09 -- scripts/common.sh@366 -- # decimal 2 00:13:37.774 05:24:09 -- scripts/common.sh@353 -- # local d=2 00:13:37.774 05:24:09 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:37.774 05:24:09 -- scripts/common.sh@355 -- # echo 2 00:13:37.774 05:24:09 -- scripts/common.sh@366 -- # ver2[v]=2 00:13:37.774 05:24:09 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:37.774 05:24:09 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:37.774 05:24:09 -- scripts/common.sh@368 -- # return 0 00:13:37.774 05:24:09 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:37.774 05:24:09 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:37.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.774 --rc genhtml_branch_coverage=1 00:13:37.774 --rc genhtml_function_coverage=1 00:13:37.774 --rc genhtml_legend=1 00:13:37.774 --rc geninfo_all_blocks=1 00:13:37.774 --rc geninfo_unexecuted_blocks=1 00:13:37.774 00:13:37.774 ' 00:13:37.774 05:24:09 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:37.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.774 --rc genhtml_branch_coverage=1 00:13:37.774 --rc genhtml_function_coverage=1 00:13:37.774 --rc genhtml_legend=1 00:13:37.774 --rc geninfo_all_blocks=1 00:13:37.774 --rc geninfo_unexecuted_blocks=1 00:13:37.774 00:13:37.774 ' 00:13:37.774 05:24:09 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:37.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.774 --rc genhtml_branch_coverage=1 00:13:37.774 --rc 
genhtml_function_coverage=1 00:13:37.774 --rc genhtml_legend=1 00:13:37.774 --rc geninfo_all_blocks=1 00:13:37.774 --rc geninfo_unexecuted_blocks=1 00:13:37.774 00:13:37.774 ' 00:13:37.774 05:24:09 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:37.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.774 --rc genhtml_branch_coverage=1 00:13:37.774 --rc genhtml_function_coverage=1 00:13:37.774 --rc genhtml_legend=1 00:13:37.774 --rc geninfo_all_blocks=1 00:13:37.774 --rc geninfo_unexecuted_blocks=1 00:13:37.774 00:13:37.774 ' 00:13:37.774 05:24:09 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:37.774 05:24:09 -- nvmf/common.sh@7 -- # uname -s 00:13:37.774 05:24:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.774 05:24:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.774 05:24:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.774 05:24:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.774 05:24:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.774 05:24:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.774 05:24:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.774 05:24:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.774 05:24:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.774 05:24:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.774 05:24:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe49e63-e03b-4663-9d3a-018d85cb6e68 00:13:37.774 05:24:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe49e63-e03b-4663-9d3a-018d85cb6e68 00:13:37.774 05:24:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.774 05:24:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.774 05:24:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:37.774 05:24:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:13:37.774 05:24:09 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:37.774 05:24:09 -- scripts/common.sh@15 -- # shopt -s extglob 00:13:37.774 05:24:09 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.774 05:24:09 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.774 05:24:09 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.774 05:24:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.774 05:24:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.774 05:24:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.774 05:24:09 -- paths/export.sh@5 -- # export PATH 00:13:37.774 05:24:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.774 05:24:09 -- nvmf/common.sh@51 -- # : 0 00:13:37.774 05:24:09 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:37.774 05:24:09 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:37.774 05:24:09 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:13:37.774 05:24:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.774 05:24:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.774 05:24:09 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:37.774 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:37.774 05:24:09 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:37.774 05:24:09 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:37.774 05:24:09 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:37.774 05:24:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:13:37.774 05:24:09 -- spdk/autotest.sh@32 -- # uname -s 00:13:37.774 05:24:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:13:37.774 05:24:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:13:37.774 05:24:09 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:13:37.774 05:24:09 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:13:37.774 05:24:09 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:13:37.774 05:24:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:13:37.774 05:24:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:13:37.774 05:24:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:13:37.774 05:24:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:13:37.774 05:24:09 -- spdk/autotest.sh@48 -- # udevadm_pid=53813 00:13:37.774 05:24:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:13:37.774 05:24:09 -- pm/common@17 -- # local monitor 00:13:37.774 05:24:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:13:37.774 05:24:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:13:37.774 05:24:09 -- pm/common@25 -- # sleep 1 00:13:37.774 05:24:09 -- pm/common@21 -- # date +%s 00:13:37.774 05:24:09 -- 
pm/common@21 -- # date +%s 00:13:37.774 05:24:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732080249 00:13:37.774 05:24:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732080249 00:13:37.774 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732080249_collect-cpu-load.pm.log 00:13:37.774 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732080249_collect-vmstat.pm.log 00:13:38.717 05:24:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:13:38.717 05:24:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:13:38.717 05:24:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:38.717 05:24:10 -- common/autotest_common.sh@10 -- # set +x 00:13:38.717 05:24:10 -- spdk/autotest.sh@59 -- # create_test_list 00:13:38.717 05:24:10 -- common/autotest_common.sh@750 -- # xtrace_disable 00:13:38.717 05:24:10 -- common/autotest_common.sh@10 -- # set +x 00:13:38.978 05:24:10 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:13:38.978 05:24:10 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:13:38.978 05:24:10 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:13:38.978 05:24:10 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:13:38.978 05:24:10 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:13:38.978 05:24:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:13:38.978 05:24:10 -- common/autotest_common.sh@1455 -- # uname 00:13:38.978 05:24:10 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:13:38.979 05:24:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:13:38.979 05:24:10 -- common/autotest_common.sh@1475 -- 
# uname 00:13:38.979 05:24:10 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:13:38.979 05:24:10 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:13:38.979 05:24:10 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:13:38.979 lcov: LCOV version 1.15 00:13:38.979 05:24:10 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:13:53.984 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:13:53.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:14:08.904 05:24:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:14:08.904 05:24:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:08.904 05:24:40 -- common/autotest_common.sh@10 -- # set +x 00:14:08.904 05:24:40 -- spdk/autotest.sh@78 -- # rm -f 00:14:08.904 05:24:40 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:08.904 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:08.904 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:14:08.904 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:14:08.904 05:24:40 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:14:08.904 05:24:40 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:08.904 05:24:40 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:08.904 05:24:40 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:08.904 
05:24:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:08.904 05:24:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:08.904 05:24:40 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:08.904 05:24:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:08.904 05:24:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:08.904 05:24:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:08.904 05:24:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:14:08.904 05:24:40 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:08.904 05:24:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:08.904 05:24:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:08.904 05:24:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:08.904 05:24:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:14:08.904 05:24:40 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:14:08.904 05:24:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:14:08.904 05:24:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:08.904 05:24:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:08.904 05:24:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:14:08.904 05:24:40 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:14:08.904 05:24:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:14:08.904 05:24:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:08.904 05:24:40 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:14:08.904 05:24:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:14:08.904 05:24:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:14:08.904 05:24:40 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:14:08.904 05:24:40 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:14:08.904 05:24:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:14:09.166 No valid GPT data, bailing 00:14:09.166 05:24:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:09.166 05:24:40 -- scripts/common.sh@394 -- # pt= 00:14:09.166 05:24:40 -- scripts/common.sh@395 -- # return 1 00:14:09.166 05:24:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:14:09.166 1+0 records in 00:14:09.166 1+0 records out 00:14:09.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00587049 s, 179 MB/s 00:14:09.166 05:24:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:14:09.166 05:24:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:14:09.166 05:24:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:14:09.166 05:24:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:14:09.166 05:24:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:14:09.166 No valid GPT data, bailing 00:14:09.166 05:24:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:14:09.166 05:24:40 -- scripts/common.sh@394 -- # pt= 00:14:09.166 05:24:40 -- scripts/common.sh@395 -- # return 1 00:14:09.166 05:24:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:14:09.166 1+0 records in 00:14:09.166 1+0 records out 00:14:09.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00584617 s, 179 MB/s 00:14:09.166 05:24:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:14:09.166 05:24:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:14:09.166 05:24:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:14:09.166 05:24:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:14:09.166 05:24:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:14:09.166 No valid GPT data, bailing 00:14:09.166 05:24:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:14:09.166 05:24:40 -- scripts/common.sh@394 -- # pt= 00:14:09.166 05:24:40 -- scripts/common.sh@395 -- # return 1 00:14:09.166 05:24:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:14:09.428 1+0 records in 00:14:09.428 1+0 records out 00:14:09.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00589311 s, 178 MB/s 00:14:09.428 05:24:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:14:09.428 05:24:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:14:09.428 05:24:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:14:09.428 05:24:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:14:09.428 05:24:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:14:09.428 No valid GPT data, bailing 00:14:09.428 05:24:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:14:09.428 05:24:41 -- scripts/common.sh@394 -- # pt= 00:14:09.428 05:24:41 -- scripts/common.sh@395 -- # return 1 00:14:09.428 05:24:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:14:09.428 1+0 records in 00:14:09.428 1+0 records out 00:14:09.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0065956 s, 159 MB/s 00:14:09.428 05:24:41 -- spdk/autotest.sh@105 -- # sync 00:14:09.428 05:24:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:14:09.428 05:24:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:14:09.428 05:24:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:14:11.342 05:24:42 -- spdk/autotest.sh@111 -- # uname -s 00:14:11.342 05:24:42 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:14:11.342 05:24:42 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:14:11.342 05:24:42 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:14:11.604 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:11.604 Hugepages 00:14:11.604 node hugesize free / total 00:14:11.604 node0 1048576kB 0 / 0 00:14:11.604 node0 2048kB 0 / 0 00:14:11.604 00:14:11.604 Type BDF Vendor Device NUMA Driver Device Block devices 00:14:11.604 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:14:11.864 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:14:11.864 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:14:11.864 05:24:43 -- spdk/autotest.sh@117 -- # uname -s 00:14:11.864 05:24:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:14:11.864 05:24:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:14:11.864 05:24:43 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:12.432 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:12.432 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:12.693 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:12.693 05:24:44 -- common/autotest_common.sh@1515 -- # sleep 1 00:14:13.637 05:24:45 -- common/autotest_common.sh@1516 -- # bdfs=() 00:14:13.637 05:24:45 -- common/autotest_common.sh@1516 -- # local bdfs 00:14:13.637 05:24:45 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:14:13.637 05:24:45 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:14:13.637 05:24:45 -- common/autotest_common.sh@1496 -- # bdfs=() 00:14:13.637 05:24:45 -- common/autotest_common.sh@1496 -- # local bdfs 00:14:13.637 05:24:45 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:13.637 05:24:45 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:13.637 05:24:45 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:14:13.637 05:24:45 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:14:13.637 05:24:45 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:14:13.637 05:24:45 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:13.899 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:13.899 Waiting for block devices as requested 00:14:14.160 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:14.161 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:14.161 05:24:45 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:14:14.161 05:24:45 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:14:14.161 05:24:45 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:14:14.161 05:24:45 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:14:14.161 05:24:45 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:14:14.161 05:24:45 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:14:14.161 05:24:45 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:14:14.161 05:24:45 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:14:14.161 05:24:45 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:14:14.161 05:24:45 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:14:14.161 05:24:45 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:14:14.161 05:24:45 -- common/autotest_common.sh@1529 -- # grep oacs 00:14:14.161 05:24:45 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:14:14.161 05:24:45 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:14:14.161 05:24:45 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:14:14.161 05:24:45 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:14:14.161 05:24:45 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:14:14.161 05:24:45 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:14:14.161 05:24:45 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:14:14.161 05:24:45 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:14:14.161 05:24:45 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:14:14.161 05:24:45 -- common/autotest_common.sh@1541 -- # continue 00:14:14.161 05:24:45 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:14:14.161 05:24:45 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:14:14.161 05:24:45 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:14:14.161 05:24:45 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:14:14.161 05:24:45 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:14:14.161 05:24:45 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:14:14.161 05:24:45 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:14:14.161 05:24:45 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:14:14.161 05:24:45 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:14:14.161 05:24:45 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:14:14.161 05:24:45 -- common/autotest_common.sh@1529 -- # grep oacs 00:14:14.161 05:24:45 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:14:14.161 05:24:45 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:14:14.161 05:24:45 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:14:14.161 05:24:45 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:14:14.161 05:24:45 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:14:14.161 05:24:45 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:14:14.161 05:24:45 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:14:14.161 05:24:45 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:14:14.161 05:24:45 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:14:14.161 05:24:45 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:14:14.161 05:24:45 -- common/autotest_common.sh@1541 -- # continue 00:14:14.161 05:24:45 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:14:14.161 05:24:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:14.161 05:24:45 -- common/autotest_common.sh@10 -- # set +x 00:14:14.422 05:24:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:14:14.422 05:24:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:14.422 05:24:46 -- common/autotest_common.sh@10 -- # set +x 00:14:14.422 05:24:46 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:15.053 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:15.053 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:15.053 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:15.314 05:24:46 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:14:15.314 05:24:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:15.314 05:24:46 -- common/autotest_common.sh@10 -- # set +x 00:14:15.314 05:24:46 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:14:15.314 05:24:46 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:14:15.314 05:24:46 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:14:15.314 05:24:46 -- common/autotest_common.sh@1561 -- # bdfs=() 00:14:15.314 05:24:46 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:14:15.314 05:24:46 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:14:15.314 05:24:46 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:14:15.314 05:24:46 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:14:15.314 
05:24:46 -- common/autotest_common.sh@1496 -- # bdfs=() 00:14:15.314 05:24:46 -- common/autotest_common.sh@1496 -- # local bdfs 00:14:15.314 05:24:46 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:15.314 05:24:46 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:15.314 05:24:46 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:14:15.314 05:24:47 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:14:15.314 05:24:47 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:14:15.314 05:24:47 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:14:15.314 05:24:47 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:14:15.314 05:24:47 -- common/autotest_common.sh@1564 -- # device=0x0010 00:14:15.314 05:24:47 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:15.314 05:24:47 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:14:15.314 05:24:47 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:14:15.314 05:24:47 -- common/autotest_common.sh@1564 -- # device=0x0010 00:14:15.314 05:24:47 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:15.314 05:24:47 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:14:15.314 05:24:47 -- common/autotest_common.sh@1570 -- # return 0 00:14:15.314 05:24:47 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:14:15.314 05:24:47 -- common/autotest_common.sh@1578 -- # return 0 00:14:15.314 05:24:47 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:14:15.314 05:24:47 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:14:15.314 05:24:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:14:15.314 05:24:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:14:15.314 05:24:47 -- spdk/autotest.sh@149 -- # timing_enter lib 00:14:15.314 05:24:47 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:14:15.314 05:24:47 -- common/autotest_common.sh@10 -- # set +x 00:14:15.314 05:24:47 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:14:15.314 05:24:47 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:14:15.314 05:24:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:15.314 05:24:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:15.314 05:24:47 -- common/autotest_common.sh@10 -- # set +x 00:14:15.314 ************************************ 00:14:15.314 START TEST env 00:14:15.314 ************************************ 00:14:15.314 05:24:47 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:14:15.315 * Looking for test storage... 00:14:15.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:14:15.315 05:24:47 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:15.315 05:24:47 env -- common/autotest_common.sh@1691 -- # lcov --version 00:14:15.315 05:24:47 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:15.576 05:24:47 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:15.576 05:24:47 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.576 05:24:47 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.576 05:24:47 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.576 05:24:47 env -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.576 05:24:47 env -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.576 05:24:47 env -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.576 05:24:47 env -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.576 05:24:47 env -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.576 05:24:47 env -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.576 05:24:47 env -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.576 05:24:47 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.576 05:24:47 env -- 
scripts/common.sh@344 -- # case "$op" in 00:14:15.576 05:24:47 env -- scripts/common.sh@345 -- # : 1 00:14:15.576 05:24:47 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.576 05:24:47 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:15.576 05:24:47 env -- scripts/common.sh@365 -- # decimal 1 00:14:15.576 05:24:47 env -- scripts/common.sh@353 -- # local d=1 00:14:15.576 05:24:47 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.576 05:24:47 env -- scripts/common.sh@355 -- # echo 1 00:14:15.576 05:24:47 env -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.576 05:24:47 env -- scripts/common.sh@366 -- # decimal 2 00:14:15.576 05:24:47 env -- scripts/common.sh@353 -- # local d=2 00:14:15.576 05:24:47 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.576 05:24:47 env -- scripts/common.sh@355 -- # echo 2 00:14:15.576 05:24:47 env -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.576 05:24:47 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.576 05:24:47 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.576 05:24:47 env -- scripts/common.sh@368 -- # return 0 00:14:15.576 05:24:47 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.576 05:24:47 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:15.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.576 --rc genhtml_branch_coverage=1 00:14:15.576 --rc genhtml_function_coverage=1 00:14:15.576 --rc genhtml_legend=1 00:14:15.576 --rc geninfo_all_blocks=1 00:14:15.576 --rc geninfo_unexecuted_blocks=1 00:14:15.576 00:14:15.576 ' 00:14:15.576 05:24:47 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:15.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.576 --rc genhtml_branch_coverage=1 00:14:15.576 --rc genhtml_function_coverage=1 00:14:15.576 --rc genhtml_legend=1 00:14:15.576 --rc 
geninfo_all_blocks=1 00:14:15.576 --rc geninfo_unexecuted_blocks=1 00:14:15.576 00:14:15.576 ' 00:14:15.576 05:24:47 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:15.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.576 --rc genhtml_branch_coverage=1 00:14:15.577 --rc genhtml_function_coverage=1 00:14:15.577 --rc genhtml_legend=1 00:14:15.577 --rc geninfo_all_blocks=1 00:14:15.577 --rc geninfo_unexecuted_blocks=1 00:14:15.577 00:14:15.577 ' 00:14:15.577 05:24:47 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:15.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.577 --rc genhtml_branch_coverage=1 00:14:15.577 --rc genhtml_function_coverage=1 00:14:15.577 --rc genhtml_legend=1 00:14:15.577 --rc geninfo_all_blocks=1 00:14:15.577 --rc geninfo_unexecuted_blocks=1 00:14:15.577 00:14:15.577 ' 00:14:15.577 05:24:47 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:14:15.577 05:24:47 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:15.577 05:24:47 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:15.577 05:24:47 env -- common/autotest_common.sh@10 -- # set +x 00:14:15.577 ************************************ 00:14:15.577 START TEST env_memory 00:14:15.577 ************************************ 00:14:15.577 05:24:47 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:14:15.577 00:14:15.577 00:14:15.577 CUnit - A unit testing framework for C - Version 2.1-3 00:14:15.577 http://cunit.sourceforge.net/ 00:14:15.577 00:14:15.577 00:14:15.577 Suite: memory 00:14:15.577 Test: alloc and free memory map ...[2024-11-20 05:24:47.294168] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:14:15.577 passed 00:14:15.577 Test: mem map translation ...[2024-11-20 05:24:47.369789] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:14:15.577 [2024-11-20 05:24:47.370077] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:14:15.577 [2024-11-20 05:24:47.370320] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:14:15.577 [2024-11-20 05:24:47.370679] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:14:15.839 passed 00:14:15.839 Test: mem map registration ...[2024-11-20 05:24:47.441333] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:14:15.839 [2024-11-20 05:24:47.441489] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:14:15.839 passed 00:14:15.839 Test: mem map adjacent registrations ...passed 00:14:15.839 00:14:15.839 Run Summary: Type Total Ran Passed Failed Inactive 00:14:15.839 suites 1 1 n/a 0 0 00:14:15.839 tests 4 4 4 0 0 00:14:15.839 asserts 152 152 152 0 n/a 00:14:15.839 00:14:15.839 Elapsed time = 0.295 seconds 00:14:15.839 00:14:15.839 real 0m0.330s 00:14:15.839 user 0m0.299s 00:14:15.839 sys 0m0.021s 00:14:15.839 05:24:47 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:15.839 05:24:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:14:15.839 ************************************ 00:14:15.839 END TEST env_memory 00:14:15.839 ************************************ 00:14:15.839 05:24:47 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:14:15.839 
05:24:47 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:15.839 05:24:47 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:15.839 05:24:47 env -- common/autotest_common.sh@10 -- # set +x 00:14:15.839 ************************************ 00:14:15.839 START TEST env_vtophys 00:14:15.839 ************************************ 00:14:15.839 05:24:47 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:14:15.839 EAL: lib.eal log level changed from notice to debug 00:14:15.839 EAL: Detected lcore 0 as core 0 on socket 0 00:14:15.839 EAL: Detected lcore 1 as core 0 on socket 0 00:14:15.839 EAL: Detected lcore 2 as core 0 on socket 0 00:14:15.839 EAL: Detected lcore 3 as core 0 on socket 0 00:14:15.839 EAL: Detected lcore 4 as core 0 on socket 0 00:14:15.839 EAL: Detected lcore 5 as core 0 on socket 0 00:14:15.839 EAL: Detected lcore 6 as core 0 on socket 0 00:14:15.839 EAL: Detected lcore 7 as core 0 on socket 0 00:14:15.839 EAL: Detected lcore 8 as core 0 on socket 0 00:14:15.839 EAL: Detected lcore 9 as core 0 on socket 0 00:14:15.839 EAL: Maximum logical cores by configuration: 128 00:14:15.839 EAL: Detected CPU lcores: 10 00:14:15.839 EAL: Detected NUMA nodes: 1 00:14:15.839 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:14:15.839 EAL: Detected shared linkage of DPDK 00:14:15.839 EAL: No shared files mode enabled, IPC will be disabled 00:14:15.839 EAL: Selected IOVA mode 'PA' 00:14:15.839 EAL: Probing VFIO support... 00:14:15.839 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:14:15.839 EAL: VFIO modules not loaded, skipping VFIO support... 00:14:15.839 EAL: Ask a virtual area of 0x2e000 bytes 00:14:15.839 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:14:15.839 EAL: Setting up physically contiguous memory... 
00:14:15.839 EAL: Setting maximum number of open files to 524288 00:14:15.839 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:14:15.839 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:14:15.839 EAL: Ask a virtual area of 0x61000 bytes 00:14:15.839 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:14:15.839 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:15.839 EAL: Ask a virtual area of 0x400000000 bytes 00:14:15.839 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:14:15.839 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:14:15.839 EAL: Ask a virtual area of 0x61000 bytes 00:14:15.839 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:14:15.839 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:15.839 EAL: Ask a virtual area of 0x400000000 bytes 00:14:15.839 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:14:15.839 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:14:15.839 EAL: Ask a virtual area of 0x61000 bytes 00:14:15.839 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:14:15.839 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:15.839 EAL: Ask a virtual area of 0x400000000 bytes 00:14:15.839 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:14:15.839 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:14:15.839 EAL: Ask a virtual area of 0x61000 bytes 00:14:15.839 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:14:15.839 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:15.839 EAL: Ask a virtual area of 0x400000000 bytes 00:14:15.839 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:14:15.839 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:14:15.839 EAL: Hugepages will be freed exactly as allocated. 
00:14:15.839 EAL: No shared files mode enabled, IPC is disabled 00:14:15.839 EAL: No shared files mode enabled, IPC is disabled 00:14:16.101 EAL: TSC frequency is ~2600000 KHz 00:14:16.101 EAL: Main lcore 0 is ready (tid=7fd8e1e11a40;cpuset=[0]) 00:14:16.101 EAL: Trying to obtain current memory policy. 00:14:16.101 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:16.101 EAL: Restoring previous memory policy: 0 00:14:16.101 EAL: request: mp_malloc_sync 00:14:16.101 EAL: No shared files mode enabled, IPC is disabled 00:14:16.101 EAL: Heap on socket 0 was expanded by 2MB 00:14:16.101 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:14:16.101 EAL: No PCI address specified using 'addr=' in: bus=pci 00:14:16.101 EAL: Mem event callback 'spdk:(nil)' registered 00:14:16.101 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:14:16.101 00:14:16.101 00:14:16.101 CUnit - A unit testing framework for C - Version 2.1-3 00:14:16.101 http://cunit.sourceforge.net/ 00:14:16.101 00:14:16.101 00:14:16.101 Suite: components_suite 00:14:16.362 Test: vtophys_malloc_test ...passed 00:14:16.362 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:14:16.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:16.362 EAL: Restoring previous memory policy: 4 00:14:16.362 EAL: Calling mem event callback 'spdk:(nil)' 00:14:16.362 EAL: request: mp_malloc_sync 00:14:16.362 EAL: No shared files mode enabled, IPC is disabled 00:14:16.362 EAL: Heap on socket 0 was expanded by 4MB 00:14:16.362 EAL: Calling mem event callback 'spdk:(nil)' 00:14:16.362 EAL: request: mp_malloc_sync 00:14:16.362 EAL: No shared files mode enabled, IPC is disabled 00:14:16.362 EAL: Heap on socket 0 was shrunk by 4MB 00:14:16.362 EAL: Trying to obtain current memory policy. 
00:14:16.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:16.362 EAL: Restoring previous memory policy: 4 00:14:16.362 EAL: Calling mem event callback 'spdk:(nil)' 00:14:16.362 EAL: request: mp_malloc_sync 00:14:16.362 EAL: No shared files mode enabled, IPC is disabled 00:14:16.362 EAL: Heap on socket 0 was expanded by 6MB 00:14:16.362 EAL: Calling mem event callback 'spdk:(nil)' 00:14:16.362 EAL: request: mp_malloc_sync 00:14:16.362 EAL: No shared files mode enabled, IPC is disabled 00:14:16.362 EAL: Heap on socket 0 was shrunk by 6MB 00:14:16.362 EAL: Trying to obtain current memory policy. 00:14:16.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:16.362 EAL: Restoring previous memory policy: 4 00:14:16.362 EAL: Calling mem event callback 'spdk:(nil)' 00:14:16.362 EAL: request: mp_malloc_sync 00:14:16.362 EAL: No shared files mode enabled, IPC is disabled 00:14:16.362 EAL: Heap on socket 0 was expanded by 10MB 00:14:16.362 EAL: Calling mem event callback 'spdk:(nil)' 00:14:16.362 EAL: request: mp_malloc_sync 00:14:16.362 EAL: No shared files mode enabled, IPC is disabled 00:14:16.362 EAL: Heap on socket 0 was shrunk by 10MB 00:14:16.622 EAL: Trying to obtain current memory policy. 00:14:16.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:16.622 EAL: Restoring previous memory policy: 4 00:14:16.622 EAL: Calling mem event callback 'spdk:(nil)' 00:14:16.622 EAL: request: mp_malloc_sync 00:14:16.622 EAL: No shared files mode enabled, IPC is disabled 00:14:16.622 EAL: Heap on socket 0 was expanded by 18MB 00:14:16.622 EAL: Calling mem event callback 'spdk:(nil)' 00:14:16.622 EAL: request: mp_malloc_sync 00:14:16.622 EAL: No shared files mode enabled, IPC is disabled 00:14:16.622 EAL: Heap on socket 0 was shrunk by 18MB 00:14:16.622 EAL: Trying to obtain current memory policy. 
00:14:16.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:16.622 EAL: Restoring previous memory policy: 4 00:14:16.622 EAL: Calling mem event callback 'spdk:(nil)' 00:14:16.622 EAL: request: mp_malloc_sync 00:14:16.622 EAL: No shared files mode enabled, IPC is disabled 00:14:16.622 EAL: Heap on socket 0 was expanded by 34MB 00:14:16.622 EAL: Calling mem event callback 'spdk:(nil)' 00:14:16.622 EAL: request: mp_malloc_sync 00:14:16.622 EAL: No shared files mode enabled, IPC is disabled 00:14:16.622 EAL: Heap on socket 0 was shrunk by 34MB 00:14:16.622 EAL: Trying to obtain current memory policy. 00:14:16.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:16.622 EAL: Restoring previous memory policy: 4 00:14:16.622 EAL: Calling mem event callback 'spdk:(nil)' 00:14:16.622 EAL: request: mp_malloc_sync 00:14:16.622 EAL: No shared files mode enabled, IPC is disabled 00:14:16.622 EAL: Heap on socket 0 was expanded by 66MB 00:14:16.622 EAL: Calling mem event callback 'spdk:(nil)' 00:14:16.622 EAL: request: mp_malloc_sync 00:14:16.622 EAL: No shared files mode enabled, IPC is disabled 00:14:16.622 EAL: Heap on socket 0 was shrunk by 66MB 00:14:16.883 EAL: Trying to obtain current memory policy. 00:14:16.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:16.883 EAL: Restoring previous memory policy: 4 00:14:16.883 EAL: Calling mem event callback 'spdk:(nil)' 00:14:16.883 EAL: request: mp_malloc_sync 00:14:16.883 EAL: No shared files mode enabled, IPC is disabled 00:14:16.883 EAL: Heap on socket 0 was expanded by 130MB 00:14:16.883 EAL: Calling mem event callback 'spdk:(nil)' 00:14:16.883 EAL: request: mp_malloc_sync 00:14:16.883 EAL: No shared files mode enabled, IPC is disabled 00:14:16.883 EAL: Heap on socket 0 was shrunk by 130MB 00:14:17.143 EAL: Trying to obtain current memory policy. 
00:14:17.143 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:17.143 EAL: Restoring previous memory policy: 4 00:14:17.143 EAL: Calling mem event callback 'spdk:(nil)' 00:14:17.143 EAL: request: mp_malloc_sync 00:14:17.143 EAL: No shared files mode enabled, IPC is disabled 00:14:17.143 EAL: Heap on socket 0 was expanded by 258MB 00:14:17.404 EAL: Calling mem event callback 'spdk:(nil)' 00:14:17.404 EAL: request: mp_malloc_sync 00:14:17.404 EAL: No shared files mode enabled, IPC is disabled 00:14:17.404 EAL: Heap on socket 0 was shrunk by 258MB 00:14:17.699 EAL: Trying to obtain current memory policy. 00:14:17.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:17.961 EAL: Restoring previous memory policy: 4 00:14:17.961 EAL: Calling mem event callback 'spdk:(nil)' 00:14:17.961 EAL: request: mp_malloc_sync 00:14:17.961 EAL: No shared files mode enabled, IPC is disabled 00:14:17.961 EAL: Heap on socket 0 was expanded by 514MB 00:14:18.535 EAL: Calling mem event callback 'spdk:(nil)' 00:14:18.535 EAL: request: mp_malloc_sync 00:14:18.536 EAL: No shared files mode enabled, IPC is disabled 00:14:18.536 EAL: Heap on socket 0 was shrunk by 514MB 00:14:19.183 EAL: Trying to obtain current memory policy. 
00:14:19.183 EAL: Setting policy MPOL_PREFERRED for socket 0
00:14:19.453 EAL: Restoring previous memory policy: 4
00:14:19.453 EAL: Calling mem event callback 'spdk:(nil)'
00:14:19.453 EAL: request: mp_malloc_sync
00:14:19.453 EAL: No shared files mode enabled, IPC is disabled
00:14:19.453 EAL: Heap on socket 0 was expanded by 1026MB
00:14:20.852 EAL: Calling mem event callback 'spdk:(nil)'
00:14:20.852 EAL: request: mp_malloc_sync
00:14:20.852 EAL: No shared files mode enabled, IPC is disabled
00:14:20.852 EAL: Heap on socket 0 was shrunk by 1026MB
00:14:21.794 passed
00:14:21.795
00:14:21.795 Run Summary: Type Total Ran Passed Failed Inactive
00:14:21.795 suites 1 1 n/a 0 0
00:14:21.795 tests 2 2 2 0 0
00:14:21.795 asserts 5740 5740 5740 0 n/a
00:14:21.795
00:14:21.795 Elapsed time = 5.666 seconds
00:14:21.795 EAL: Calling mem event callback 'spdk:(nil)'
00:14:21.795 EAL: request: mp_malloc_sync
00:14:21.795 EAL: No shared files mode enabled, IPC is disabled
00:14:21.795 EAL: Heap on socket 0 was shrunk by 2MB
00:14:21.795 EAL: No shared files mode enabled, IPC is disabled
00:14:21.795 EAL: No shared files mode enabled, IPC is disabled
00:14:21.795 EAL: No shared files mode enabled, IPC is disabled
00:14:21.795
00:14:21.795 real 0m5.943s
00:14:21.795 user 0m4.953s
00:14:21.795 sys 0m0.835s
00:14:21.795 05:24:53 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:21.795 05:24:53 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:14:21.795 ************************************
00:14:21.795 END TEST env_vtophys
00:14:21.795 ************************************
00:14:21.795 05:24:53 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:14:21.795 05:24:53 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:14:21.795 05:24:53 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:14:21.795 05:24:53 env -- common/autotest_common.sh@10 -- # set +x
00:14:21.795 ************************************
00:14:21.795 START TEST env_pci ************************************
00:14:21.795 05:24:53 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:14:22.057
00:14:22.057
00:14:22.057 CUnit - A unit testing framework for C - Version 2.1-3
00:14:22.057 http://cunit.sourceforge.net/
00:14:22.057
00:14:22.057
00:14:22.057 Suite: pci
00:14:22.057 Test: pci_hook ...[2024-11-20 05:24:53.636106] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56089 has claimed it
00:14:22.057 EAL: Cannot find device (10000:00:01.0)
00:14:22.057 EAL: Failed to attach device on primary process
00:14:22.057 passed
00:14:22.057
00:14:22.057 Run Summary: Type Total Ran Passed Failed Inactive
00:14:22.057 suites 1 1 n/a 0 0
00:14:22.057 tests 1 1 1 0 0
00:14:22.057 asserts 25 25 25 0 n/a
00:14:22.057
00:14:22.057 Elapsed time = 0.007 seconds
00:14:22.057
00:14:22.057 real 0m0.070s
00:14:22.057 user 0m0.033s
00:14:22.057 sys 0m0.036s
00:14:22.057 05:24:53 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:22.057 ************************************
00:14:22.057 END TEST env_pci
00:14:22.057 ************************************
00:14:22.057 05:24:53 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:14:22.057 05:24:53 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:14:22.057 05:24:53 env -- env/env.sh@15 -- # uname
00:14:22.057 05:24:53 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:14:22.057 05:24:53 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:14:22.057 05:24:53 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:14:22.057 05:24:53 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:14:22.057 05:24:53 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:14:22.057 05:24:53 env -- common/autotest_common.sh@10 -- # set +x
00:14:22.057 ************************************
00:14:22.057 START TEST env_dpdk_post_init ************************************
00:14:22.057 05:24:53 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:14:22.057 EAL: Detected CPU lcores: 10
00:14:22.057 EAL: Detected NUMA nodes: 1
00:14:22.057 EAL: Detected shared linkage of DPDK
00:14:22.057 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:14:22.057 EAL: Selected IOVA mode 'PA'
00:14:22.317 TELEMETRY: No legacy callbacks, legacy socket not created
00:14:22.317 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:14:22.317 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:14:22.317 Starting DPDK initialization...
00:14:22.317 Starting SPDK post initialization...
00:14:22.317 SPDK NVMe probe
00:14:22.317 Attaching to 0000:00:10.0
00:14:22.317 Attaching to 0000:00:11.0
00:14:22.317 Attached to 0000:00:10.0
00:14:22.317 Attached to 0000:00:11.0
00:14:22.317 Cleaning up...
00:14:22.317
00:14:22.317 real 0m0.235s
00:14:22.317 user 0m0.070s
00:14:22.317 sys 0m0.067s
00:14:22.317 05:24:53 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:22.317 ************************************
00:14:22.317 END TEST env_dpdk_post_init
00:14:22.317 ************************************
00:14:22.317 05:24:53 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:14:22.317 05:24:54 env -- env/env.sh@26 -- # uname
00:14:22.317 05:24:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:14:22.317 05:24:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:14:22.317 05:24:54 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:14:22.317 05:24:54 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:14:22.317 05:24:54 env -- common/autotest_common.sh@10 -- # set +x
00:14:22.317 ************************************
00:14:22.317 START TEST env_mem_callbacks ************************************
00:14:22.318 05:24:54 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:14:22.318 EAL: Detected CPU lcores: 10
00:14:22.318 EAL: Detected NUMA nodes: 1
00:14:22.318 EAL: Detected shared linkage of DPDK
00:14:22.318 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:14:22.318 EAL: Selected IOVA mode 'PA'
00:14:22.579 TELEMETRY: No legacy callbacks, legacy socket not created
00:14:22.579
00:14:22.579
00:14:22.579 CUnit - A unit testing framework for C - Version 2.1-3
00:14:22.579 http://cunit.sourceforge.net/
00:14:22.579
00:14:22.579
00:14:22.579 Suite: memory
00:14:22.579 Test: test ...
00:14:22.579 register 0x200000200000 2097152
00:14:22.579 malloc 3145728
00:14:22.579 register 0x200000400000 4194304
00:14:22.579 buf 0x2000004fffc0 len 3145728 PASSED
00:14:22.579 malloc 64
00:14:22.579 buf 0x2000004ffec0 len 64 PASSED
00:14:22.579 malloc 4194304
00:14:22.579 register 0x200000800000 6291456
00:14:22.579 buf 0x2000009fffc0 len 4194304 PASSED
00:14:22.579 free 0x2000004fffc0 3145728
00:14:22.579 free 0x2000004ffec0 64
00:14:22.579 unregister 0x200000400000 4194304 PASSED
00:14:22.579 free 0x2000009fffc0 4194304
00:14:22.579 unregister 0x200000800000 6291456 PASSED
00:14:22.579 malloc 8388608
00:14:22.579 register 0x200000400000 10485760
00:14:22.579 buf 0x2000005fffc0 len 8388608 PASSED
00:14:22.579 free 0x2000005fffc0 8388608
00:14:22.579 unregister 0x200000400000 10485760 PASSED
00:14:22.579 passed
00:14:22.579
00:14:22.579 Run Summary: Type Total Ran Passed Failed Inactive
00:14:22.579 suites 1 1 n/a 0 0
00:14:22.579 tests 1 1 1 0 0
00:14:22.579 asserts 15 15 15 0 n/a
00:14:22.579
00:14:22.579 Elapsed time = 0.045 seconds
00:14:22.579 ************************************
00:14:22.579 END TEST env_mem_callbacks
00:14:22.579 ************************************
00:14:22.579
00:14:22.579 real 0m0.217s
00:14:22.579 user 0m0.064s
00:14:22.579 sys 0m0.050s
00:14:22.579 05:24:54 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:22.579 05:24:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:14:22.579
00:14:22.579 real 0m7.221s
00:14:22.579 user 0m5.559s
00:14:22.579 sys 0m1.242s
00:14:22.579 ************************************
00:14:22.579 END TEST env
00:14:22.579 ************************************
00:14:22.579 05:24:54 env -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:22.579 05:24:54 env -- common/autotest_common.sh@10 -- # set +x
00:14:22.579 05:24:54 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:14:22.579 05:24:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:14:22.579 05:24:54 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:14:22.579 05:24:54 -- common/autotest_common.sh@10 -- # set +x
00:14:22.579 ************************************
00:14:22.579 START TEST rpc ************************************
00:14:22.579 05:24:54 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:14:22.579 * Looking for test storage...
00:14:22.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:14:22.579 05:24:54 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:14:22.579 05:24:54 rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:14:22.579 05:24:54 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:14:22.841 05:24:54 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:14:22.841 05:24:54 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:22.841 05:24:54 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:22.841 05:24:54 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:22.841 05:24:54 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:14:22.841 05:24:54 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:14:22.841 05:24:54 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:14:22.841 05:24:54 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:14:22.841 05:24:54 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:14:22.841 05:24:54 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:14:22.841 05:24:54 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:14:22.841 05:24:54 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:22.841 05:24:54 rpc -- scripts/common.sh@344 -- # case "$op" in
00:14:22.841 05:24:54 rpc -- scripts/common.sh@345 -- # : 1
00:14:22.841 05:24:54 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:22.841 05:24:54 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:22.841 05:24:54 rpc -- scripts/common.sh@365 -- # decimal 1
00:14:22.841 05:24:54 rpc -- scripts/common.sh@353 -- # local d=1
00:14:22.841 05:24:54 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:22.841 05:24:54 rpc -- scripts/common.sh@355 -- # echo 1
00:14:22.841 05:24:54 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:14:22.841 05:24:54 rpc -- scripts/common.sh@366 -- # decimal 2
00:14:22.841 05:24:54 rpc -- scripts/common.sh@353 -- # local d=2
00:14:22.841 05:24:54 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:22.841 05:24:54 rpc -- scripts/common.sh@355 -- # echo 2
00:14:22.841 05:24:54 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:14:22.841 05:24:54 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:22.841 05:24:54 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:22.841 05:24:54 rpc -- scripts/common.sh@368 -- # return 0
00:14:22.841 05:24:54 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:22.841 05:24:54 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:14:22.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:22.841 --rc genhtml_branch_coverage=1
00:14:22.841 --rc genhtml_function_coverage=1
00:14:22.841 --rc genhtml_legend=1
00:14:22.841 --rc geninfo_all_blocks=1
00:14:22.842 --rc geninfo_unexecuted_blocks=1
00:14:22.842
00:14:22.842 '
00:14:22.842 05:24:54 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:14:22.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:22.842 --rc genhtml_branch_coverage=1
00:14:22.842 --rc genhtml_function_coverage=1
00:14:22.842 --rc genhtml_legend=1
00:14:22.842 --rc geninfo_all_blocks=1
00:14:22.842 --rc geninfo_unexecuted_blocks=1
00:14:22.842
00:14:22.842 '
00:14:22.842 05:24:54 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:14:22.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:22.842 --rc genhtml_branch_coverage=1
00:14:22.842 --rc genhtml_function_coverage=1
00:14:22.842 --rc genhtml_legend=1
00:14:22.842 --rc geninfo_all_blocks=1
00:14:22.842 --rc geninfo_unexecuted_blocks=1
00:14:22.842
00:14:22.842 '
00:14:22.842 05:24:54 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:14:22.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:22.842 --rc genhtml_branch_coverage=1
00:14:22.842 --rc genhtml_function_coverage=1
00:14:22.842 --rc genhtml_legend=1
00:14:22.842 --rc geninfo_all_blocks=1
00:14:22.842 --rc geninfo_unexecuted_blocks=1
00:14:22.842
00:14:22.842 '
00:14:22.842 05:24:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56210
00:14:22.842 05:24:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:14:22.842 05:24:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56210
00:14:22.842 05:24:54 rpc -- common/autotest_common.sh@833 -- # '[' -z 56210 ']'
00:14:22.842 05:24:54 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:22.842 05:24:54 rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:14:22.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:22.842 05:24:54 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:22.842 05:24:54 rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:14:22.842 05:24:54 rpc -- common/autotest_common.sh@10 -- # set +x
00:14:22.842 05:24:54 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:14:22.842 [2024-11-20 05:24:54.546645] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:14:22.842 [2024-11-20 05:24:54.546772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56210 ]
00:14:23.104 [2024-11-20 05:24:54.705198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:23.104 [2024-11-20 05:24:54.824308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:14:23.104 [2024-11-20 05:24:54.824391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56210' to capture a snapshot of events at runtime.
00:14:23.104 [2024-11-20 05:24:54.824403] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:23.104 [2024-11-20 05:24:54.824415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:23.104 [2024-11-20 05:24:54.824424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56210 for offline analysis/debug.
00:14:23.104 [2024-11-20 05:24:54.825306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:23.678 05:24:55 rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:14:23.678 05:24:55 rpc -- common/autotest_common.sh@866 -- # return 0
00:14:23.678 05:24:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:14:23.678 05:24:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:14:23.679 05:24:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:14:23.679 05:24:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:14:23.679 05:24:55 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:14:23.679 05:24:55 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:14:23.679 05:24:55 rpc -- common/autotest_common.sh@10 -- # set +x
00:14:23.679 ************************************
00:14:23.679 START TEST rpc_integrity ************************************
00:14:23.679 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity
00:14:23.679 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:23.679 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:23.679 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:14:23.679 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:23.679 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:14:23.679 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:14:23.940 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:14:23.940 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:14:23.940 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:23.940 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:14:23.940 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:23.940 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:14:23.940 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:14:23.940 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:23.940 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:14:23.940 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:23.940 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:14:23.940 {
00:14:23.940 "name": "Malloc0",
00:14:23.940 "aliases": [
00:14:23.940 "bf879960-b413-429e-bfb2-56624de1f9d8"
00:14:23.940 ],
00:14:23.940 "product_name": "Malloc disk",
00:14:23.940 "block_size": 512,
00:14:23.940 "num_blocks": 16384,
00:14:23.940 "uuid": "bf879960-b413-429e-bfb2-56624de1f9d8",
00:14:23.940 "assigned_rate_limits": {
00:14:23.940 "rw_ios_per_sec": 0,
00:14:23.940 "rw_mbytes_per_sec": 0,
00:14:23.940 "r_mbytes_per_sec": 0,
00:14:23.940 "w_mbytes_per_sec": 0
00:14:23.940 },
00:14:23.940 "claimed": false,
00:14:23.940 "zoned": false,
00:14:23.940 "supported_io_types": {
00:14:23.940 "read": true,
00:14:23.940 "write": true,
00:14:23.940 "unmap": true,
00:14:23.940 "flush": true,
00:14:23.940 "reset": true,
00:14:23.940 "nvme_admin": false,
00:14:23.941 "nvme_io": false,
00:14:23.941 "nvme_io_md": false,
00:14:23.941 "write_zeroes": true,
00:14:23.941 "zcopy": true,
00:14:23.941 "get_zone_info": false,
00:14:23.941 "zone_management": false,
00:14:23.941 "zone_append": false,
00:14:23.941 "compare": false,
00:14:23.941 "compare_and_write": false,
00:14:23.941 "abort": true,
00:14:23.941 "seek_hole": false,
00:14:23.941 "seek_data": false,
00:14:23.941 "copy": true,
00:14:23.941 "nvme_iov_md": false
00:14:23.941 },
00:14:23.941 "memory_domains": [
00:14:23.941 {
00:14:23.941 "dma_device_id": "system",
00:14:23.941 "dma_device_type": 1
00:14:23.941 },
00:14:23.941 {
00:14:23.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:23.941 "dma_device_type": 2
00:14:23.941 }
00:14:23.941 ],
00:14:23.941 "driver_specific": {}
00:14:23.941 }
00:14:23.941 ]'
00:14:23.941 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:14:23.941 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:14:23.941 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:14:23.941 [2024-11-20 05:24:55.604096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:14:23.941 [2024-11-20 05:24:55.604163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:23.941 [2024-11-20 05:24:55.604189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:14:23.941 [2024-11-20 05:24:55.604204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:23.941 [2024-11-20 05:24:55.606615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:23.941 [2024-11-20 05:24:55.606655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:14:23.941 Passthru0
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:23.941 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:23.941 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:14:23.941 {
00:14:23.941 "name": "Malloc0",
00:14:23.941 "aliases": [
00:14:23.941 "bf879960-b413-429e-bfb2-56624de1f9d8"
00:14:23.941 ],
00:14:23.941 "product_name": "Malloc disk",
00:14:23.941 "block_size": 512,
00:14:23.941 "num_blocks": 16384,
00:14:23.941 "uuid": "bf879960-b413-429e-bfb2-56624de1f9d8",
00:14:23.941 "assigned_rate_limits": {
00:14:23.941 "rw_ios_per_sec": 0,
00:14:23.941 "rw_mbytes_per_sec": 0,
00:14:23.941 "r_mbytes_per_sec": 0,
00:14:23.941 "w_mbytes_per_sec": 0
00:14:23.941 },
00:14:23.941 "claimed": true,
00:14:23.941 "claim_type": "exclusive_write",
00:14:23.941 "zoned": false,
00:14:23.941 "supported_io_types": {
00:14:23.941 "read": true,
00:14:23.941 "write": true,
00:14:23.941 "unmap": true,
00:14:23.941 "flush": true,
00:14:23.941 "reset": true,
00:14:23.941 "nvme_admin": false,
00:14:23.941 "nvme_io": false,
00:14:23.941 "nvme_io_md": false,
00:14:23.941 "write_zeroes": true,
00:14:23.941 "zcopy": true,
00:14:23.941 "get_zone_info": false,
00:14:23.941 "zone_management": false,
00:14:23.941 "zone_append": false,
00:14:23.941 "compare": false,
00:14:23.941 "compare_and_write": false,
00:14:23.941 "abort": true,
00:14:23.941 "seek_hole": false,
00:14:23.941 "seek_data": false,
00:14:23.941 "copy": true,
00:14:23.941 "nvme_iov_md": false
00:14:23.941 },
00:14:23.941 "memory_domains": [
00:14:23.941 {
00:14:23.941 "dma_device_id": "system",
00:14:23.941 "dma_device_type": 1
00:14:23.941 },
00:14:23.941 {
00:14:23.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:23.941 "dma_device_type": 2
00:14:23.941 }
00:14:23.941 ],
00:14:23.941 "driver_specific": {}
00:14:23.941 },
00:14:23.941 {
00:14:23.941 "name": "Passthru0",
00:14:23.941 "aliases": [
00:14:23.941 "caf29674-7086-510f-a52e-6a4fc07ad239"
00:14:23.941 ],
00:14:23.941 "product_name": "passthru",
00:14:23.941 "block_size": 512,
00:14:23.941 "num_blocks": 16384,
00:14:23.941 "uuid": "caf29674-7086-510f-a52e-6a4fc07ad239",
00:14:23.941 "assigned_rate_limits": {
00:14:23.941 "rw_ios_per_sec": 0,
00:14:23.941 "rw_mbytes_per_sec": 0,
00:14:23.941 "r_mbytes_per_sec": 0,
00:14:23.941 "w_mbytes_per_sec": 0
00:14:23.941 },
00:14:23.941 "claimed": false,
00:14:23.941 "zoned": false,
00:14:23.941 "supported_io_types": {
00:14:23.941 "read": true,
00:14:23.941 "write": true,
00:14:23.941 "unmap": true,
00:14:23.941 "flush": true,
00:14:23.941 "reset": true,
00:14:23.941 "nvme_admin": false,
00:14:23.941 "nvme_io": false,
00:14:23.941 "nvme_io_md": false,
00:14:23.941 "write_zeroes": true,
00:14:23.941 "zcopy": true,
00:14:23.941 "get_zone_info": false,
00:14:23.941 "zone_management": false,
00:14:23.941 "zone_append": false,
00:14:23.941 "compare": false,
00:14:23.941 "compare_and_write": false,
00:14:23.941 "abort": true,
00:14:23.941 "seek_hole": false,
00:14:23.941 "seek_data": false,
00:14:23.941 "copy": true,
00:14:23.941 "nvme_iov_md": false
00:14:23.941 },
00:14:23.941 "memory_domains": [
00:14:23.941 {
00:14:23.941 "dma_device_id": "system",
00:14:23.941 "dma_device_type": 1
00:14:23.941 },
00:14:23.941 {
00:14:23.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:23.941 "dma_device_type": 2
00:14:23.941 }
00:14:23.941 ],
00:14:23.941 "driver_specific": {
00:14:23.941 "passthru": {
00:14:23.941 "name": "Passthru0",
00:14:23.941 "base_bdev_name": "Malloc0"
00:14:23.941 }
00:14:23.941 }
00:14:23.941 }
00:14:23.941 ]'
00:14:23.941 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:14:23.941 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:14:23.941 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:23.941 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:23.941 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:23.941 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:14:23.941 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:14:23.941 05:24:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:14:23.941
00:14:23.941 real 0m0.249s
00:14:23.941 user 0m0.126s
00:14:23.941 sys 0m0.033s
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:23.941 ************************************
00:14:23.941 05:24:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:14:23.941 END TEST rpc_integrity
00:14:23.941 ************************************
00:14:23.941 05:24:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:14:23.941 05:24:55 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:14:23.941 05:24:55 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:14:23.941 05:24:55 rpc -- common/autotest_common.sh@10 -- # set +x
00:14:24.204 ************************************
00:14:24.204 START TEST rpc_plugins
00:14:24.204 ************************************
00:14:24.204 05:24:55 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins
00:14:24.204 05:24:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:14:24.204 05:24:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:24.204 05:24:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:14:24.204 05:24:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:24.204 05:24:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:14:24.204 05:24:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:14:24.204 05:24:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:24.204 05:24:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:14:24.204 05:24:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:24.204 05:24:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:14:24.205 {
00:14:24.205 "name": "Malloc1",
00:14:24.205 "aliases": [
00:14:24.205 "54973bd8-9b30-4a74-9c75-f362967b10c1"
00:14:24.205 ],
00:14:24.205 "product_name": "Malloc disk",
00:14:24.205 "block_size": 4096,
00:14:24.205 "num_blocks": 256,
00:14:24.205 "uuid": "54973bd8-9b30-4a74-9c75-f362967b10c1",
00:14:24.205 "assigned_rate_limits": {
00:14:24.205 "rw_ios_per_sec": 0,
00:14:24.205 "rw_mbytes_per_sec": 0,
00:14:24.205 "r_mbytes_per_sec": 0,
00:14:24.205 "w_mbytes_per_sec": 0
00:14:24.205 },
00:14:24.205 "claimed": false,
00:14:24.205 "zoned": false,
00:14:24.205 "supported_io_types": {
00:14:24.205 "read": true,
00:14:24.205 "write": true,
00:14:24.205 "unmap": true,
00:14:24.205 "flush": true,
00:14:24.205 "reset": true,
00:14:24.205 "nvme_admin": false,
00:14:24.205 "nvme_io": false,
00:14:24.205 "nvme_io_md": false,
00:14:24.205 "write_zeroes": true,
00:14:24.205 "zcopy": true,
00:14:24.205 "get_zone_info": false,
00:14:24.205 "zone_management": false,
00:14:24.205 "zone_append": false,
00:14:24.205 "compare": false,
00:14:24.205 "compare_and_write": false,
00:14:24.205 "abort": true,
00:14:24.205 "seek_hole": false,
00:14:24.205 "seek_data": false,
00:14:24.205 "copy": true,
00:14:24.205 "nvme_iov_md": false
00:14:24.205 },
00:14:24.205 "memory_domains": [
00:14:24.205 {
00:14:24.205 "dma_device_id": "system",
00:14:24.205 "dma_device_type": 1
00:14:24.205 },
00:14:24.205 {
00:14:24.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:24.205 "dma_device_type": 2
00:14:24.205 }
00:14:24.205 ],
00:14:24.205 "driver_specific": {}
00:14:24.205 }
00:14:24.205 ]'
00:14:24.205 05:24:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:14:24.205 05:24:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:14:24.205 05:24:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:14:24.205 05:24:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:24.205 05:24:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:14:24.205 05:24:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:24.205 05:24:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:14:24.205 05:24:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:24.205 05:24:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:14:24.205 05:24:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:24.205 05:24:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:14:24.205 05:24:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:14:24.205 05:24:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:14:24.205
00:14:24.205 real 0m0.121s
00:14:24.205 user 0m0.061s
00:14:24.205 sys 0m0.019s
00:14:24.205 05:24:55 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:24.205 05:24:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:14:24.205 ************************************
00:14:24.205 END TEST rpc_plugins
00:14:24.205 ************************************
00:14:24.205 05:24:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:14:24.205 05:24:55 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:14:24.205 05:24:55 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:14:24.205 05:24:55 rpc -- common/autotest_common.sh@10 -- # set +x
00:14:24.205 ************************************
00:14:24.205 START TEST rpc_trace_cmd_test ************************************
00:14:24.205 05:24:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test
00:14:24.205 05:24:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:14:24.205 05:24:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:14:24.205 05:24:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:24.205 05:24:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.205 05:24:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:24.205 05:24:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:14:24.205 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56210",
00:14:24.205 "tpoint_group_mask": "0x8",
00:14:24.205 "iscsi_conn": {
00:14:24.205 "mask": "0x2",
00:14:24.205 "tpoint_mask": "0x0"
00:14:24.205 },
00:14:24.205 "scsi": {
00:14:24.205 "mask": "0x4",
00:14:24.205 "tpoint_mask": "0x0"
00:14:24.205 },
00:14:24.205 "bdev": {
00:14:24.205 "mask": "0x8",
00:14:24.205 "tpoint_mask": "0xffffffffffffffff"
00:14:24.205 },
00:14:24.205 "nvmf_rdma": {
00:14:24.205 "mask": "0x10",
00:14:24.205 "tpoint_mask": "0x0"
00:14:24.205 },
00:14:24.205 "nvmf_tcp": {
00:14:24.205 "mask": "0x20",
00:14:24.205 "tpoint_mask": "0x0"
00:14:24.205 },
00:14:24.205 "ftl": {
00:14:24.205 "mask": "0x40",
00:14:24.205 "tpoint_mask": "0x0"
00:14:24.205 },
00:14:24.205 "blobfs": {
00:14:24.205 "mask": "0x80",
00:14:24.205 "tpoint_mask": "0x0"
00:14:24.205 },
00:14:24.205 "dsa": {
00:14:24.205 "mask": "0x200",
00:14:24.205 "tpoint_mask": "0x0"
00:14:24.205 },
00:14:24.205 "thread": {
00:14:24.205 "mask": "0x400",
00:14:24.205 "tpoint_mask": "0x0"
00:14:24.205 },
00:14:24.205 "nvme_pcie": {
00:14:24.205 "mask": "0x800",
00:14:24.205 "tpoint_mask": "0x0"
00:14:24.205 },
00:14:24.205 "iaa": {
00:14:24.205 "mask": "0x1000",
00:14:24.205 "tpoint_mask": "0x0"
00:14:24.205 },
00:14:24.205 "nvme_tcp": {
00:14:24.205 "mask": "0x2000",
00:14:24.205 "tpoint_mask": "0x0"
00:14:24.205 },
00:14:24.205 "bdev_nvme": {
00:14:24.205 "mask": "0x4000",
00:14:24.205 "tpoint_mask": "0x0"
00:14:24.205 },
00:14:24.205 "sock": {
00:14:24.205 "mask": "0x8000",
00:14:24.205 "tpoint_mask": "0x0"
00:14:24.205 },
00:14:24.205 "blob": {
00:14:24.205 "mask": "0x10000",
00:14:24.205 "tpoint_mask": "0x0"
00:14:24.205 },
00:14:24.205 "bdev_raid": {
00:14:24.205 "mask": "0x20000",
00:14:24.205 "tpoint_mask": "0x0"
00:14:24.205 },
00:14:24.205 "scheduler": {
00:14:24.205 "mask": "0x40000",
00:14:24.205 "tpoint_mask": "0x0"
00:14:24.205 }
00:14:24.205 }'
00:14:24.205 05:24:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:14:24.205 05:24:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:14:24.205 05:24:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:14:24.468 05:24:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:14:24.468 05:24:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:14:24.468 05:24:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:14:24.468 05:24:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:14:24.468 05:24:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:14:24.468 05:24:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:14:24.468 05:24:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:14:24.468
00:14:24.468 real 0m0.208s
00:14:24.468 user 0m0.169s
00:14:24.468 sys 0m0.027s
00:14:24.468 05:24:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:24.468 05:24:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.468 ************************************ 00:14:24.468 END TEST rpc_trace_cmd_test 00:14:24.468 ************************************ 00:14:24.468 05:24:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:14:24.468 05:24:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:14:24.468 05:24:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:14:24.468 05:24:56 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:24.468 05:24:56 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:24.468 05:24:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.468 ************************************ 00:14:24.468 START TEST rpc_daemon_integrity 00:14:24.468 ************************************ 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:14:24.468 { 00:14:24.468 "name": "Malloc2", 00:14:24.468 "aliases": [ 00:14:24.468 "e86a3283-150e-46d6-ae13-b9f98b8938d6" 00:14:24.468 ], 00:14:24.468 "product_name": "Malloc disk", 00:14:24.468 "block_size": 512, 00:14:24.468 "num_blocks": 16384, 00:14:24.468 "uuid": "e86a3283-150e-46d6-ae13-b9f98b8938d6", 00:14:24.468 "assigned_rate_limits": { 00:14:24.468 "rw_ios_per_sec": 0, 00:14:24.468 "rw_mbytes_per_sec": 0, 00:14:24.468 "r_mbytes_per_sec": 0, 00:14:24.468 "w_mbytes_per_sec": 0 00:14:24.468 }, 00:14:24.468 "claimed": false, 00:14:24.468 "zoned": false, 00:14:24.468 "supported_io_types": { 00:14:24.468 "read": true, 00:14:24.468 "write": true, 00:14:24.468 "unmap": true, 00:14:24.468 "flush": true, 00:14:24.468 "reset": true, 00:14:24.468 "nvme_admin": false, 00:14:24.468 "nvme_io": false, 00:14:24.468 "nvme_io_md": false, 00:14:24.468 "write_zeroes": true, 00:14:24.468 "zcopy": true, 00:14:24.468 "get_zone_info": false, 00:14:24.468 "zone_management": false, 00:14:24.468 "zone_append": false, 00:14:24.468 "compare": false, 00:14:24.468 "compare_and_write": false, 00:14:24.468 "abort": true, 00:14:24.468 "seek_hole": false, 00:14:24.468 "seek_data": false, 00:14:24.468 "copy": true, 00:14:24.468 "nvme_iov_md": false 00:14:24.468 }, 00:14:24.468 "memory_domains": [ 00:14:24.468 { 00:14:24.468 "dma_device_id": "system", 00:14:24.468 "dma_device_type": 1 00:14:24.468 }, 00:14:24.468 { 00:14:24.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.468 "dma_device_type": 2 00:14:24.468 } 
00:14:24.468 ], 00:14:24.468 "driver_specific": {} 00:14:24.468 } 00:14:24.468 ]' 00:14:24.468 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:14:24.729 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:14:24.729 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:14:24.729 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.729 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:24.729 [2024-11-20 05:24:56.317932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:14:24.729 [2024-11-20 05:24:56.318012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.729 [2024-11-20 05:24:56.318038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:24.729 [2024-11-20 05:24:56.318050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.729 [2024-11-20 05:24:56.320487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.729 [2024-11-20 05:24:56.320527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:14:24.729 Passthru0 00:14:24.729 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.729 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:14:24.729 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.729 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:24.729 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.729 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:14:24.729 { 00:14:24.729 "name": "Malloc2", 00:14:24.729 "aliases": [ 00:14:24.729 "e86a3283-150e-46d6-ae13-b9f98b8938d6" 
00:14:24.729 ], 00:14:24.729 "product_name": "Malloc disk", 00:14:24.729 "block_size": 512, 00:14:24.729 "num_blocks": 16384, 00:14:24.729 "uuid": "e86a3283-150e-46d6-ae13-b9f98b8938d6", 00:14:24.729 "assigned_rate_limits": { 00:14:24.729 "rw_ios_per_sec": 0, 00:14:24.729 "rw_mbytes_per_sec": 0, 00:14:24.729 "r_mbytes_per_sec": 0, 00:14:24.729 "w_mbytes_per_sec": 0 00:14:24.729 }, 00:14:24.729 "claimed": true, 00:14:24.729 "claim_type": "exclusive_write", 00:14:24.729 "zoned": false, 00:14:24.729 "supported_io_types": { 00:14:24.729 "read": true, 00:14:24.729 "write": true, 00:14:24.729 "unmap": true, 00:14:24.729 "flush": true, 00:14:24.729 "reset": true, 00:14:24.729 "nvme_admin": false, 00:14:24.729 "nvme_io": false, 00:14:24.729 "nvme_io_md": false, 00:14:24.729 "write_zeroes": true, 00:14:24.729 "zcopy": true, 00:14:24.729 "get_zone_info": false, 00:14:24.729 "zone_management": false, 00:14:24.729 "zone_append": false, 00:14:24.729 "compare": false, 00:14:24.729 "compare_and_write": false, 00:14:24.729 "abort": true, 00:14:24.729 "seek_hole": false, 00:14:24.729 "seek_data": false, 00:14:24.729 "copy": true, 00:14:24.729 "nvme_iov_md": false 00:14:24.729 }, 00:14:24.729 "memory_domains": [ 00:14:24.729 { 00:14:24.729 "dma_device_id": "system", 00:14:24.729 "dma_device_type": 1 00:14:24.729 }, 00:14:24.729 { 00:14:24.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.729 "dma_device_type": 2 00:14:24.729 } 00:14:24.729 ], 00:14:24.729 "driver_specific": {} 00:14:24.729 }, 00:14:24.729 { 00:14:24.729 "name": "Passthru0", 00:14:24.729 "aliases": [ 00:14:24.729 "ffe0c0ca-a644-5a76-a321-403d2beb0fed" 00:14:24.729 ], 00:14:24.729 "product_name": "passthru", 00:14:24.729 "block_size": 512, 00:14:24.729 "num_blocks": 16384, 00:14:24.729 "uuid": "ffe0c0ca-a644-5a76-a321-403d2beb0fed", 00:14:24.729 "assigned_rate_limits": { 00:14:24.729 "rw_ios_per_sec": 0, 00:14:24.729 "rw_mbytes_per_sec": 0, 00:14:24.729 "r_mbytes_per_sec": 0, 00:14:24.729 "w_mbytes_per_sec": 0 
00:14:24.729 }, 00:14:24.729 "claimed": false, 00:14:24.729 "zoned": false, 00:14:24.729 "supported_io_types": { 00:14:24.729 "read": true, 00:14:24.729 "write": true, 00:14:24.729 "unmap": true, 00:14:24.729 "flush": true, 00:14:24.729 "reset": true, 00:14:24.729 "nvme_admin": false, 00:14:24.729 "nvme_io": false, 00:14:24.729 "nvme_io_md": false, 00:14:24.729 "write_zeroes": true, 00:14:24.729 "zcopy": true, 00:14:24.729 "get_zone_info": false, 00:14:24.729 "zone_management": false, 00:14:24.729 "zone_append": false, 00:14:24.729 "compare": false, 00:14:24.729 "compare_and_write": false, 00:14:24.729 "abort": true, 00:14:24.729 "seek_hole": false, 00:14:24.729 "seek_data": false, 00:14:24.729 "copy": true, 00:14:24.730 "nvme_iov_md": false 00:14:24.730 }, 00:14:24.730 "memory_domains": [ 00:14:24.730 { 00:14:24.730 "dma_device_id": "system", 00:14:24.730 "dma_device_type": 1 00:14:24.730 }, 00:14:24.730 { 00:14:24.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.730 "dma_device_type": 2 00:14:24.730 } 00:14:24.730 ], 00:14:24.730 "driver_specific": { 00:14:24.730 "passthru": { 00:14:24.730 "name": "Passthru0", 00:14:24.730 "base_bdev_name": "Malloc2" 00:14:24.730 } 00:14:24.730 } 00:14:24.730 } 00:14:24.730 ]' 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:14:24.730 00:14:24.730 real 0m0.260s 00:14:24.730 user 0m0.136s 00:14:24.730 sys 0m0.034s 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:24.730 05:24:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:24.730 ************************************ 00:14:24.730 END TEST rpc_daemon_integrity 00:14:24.730 ************************************ 00:14:24.730 05:24:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:24.730 05:24:56 rpc -- rpc/rpc.sh@84 -- # killprocess 56210 00:14:24.730 05:24:56 rpc -- common/autotest_common.sh@952 -- # '[' -z 56210 ']' 00:14:24.730 05:24:56 rpc -- common/autotest_common.sh@956 -- # kill -0 56210 00:14:24.730 05:24:56 rpc -- common/autotest_common.sh@957 -- # uname 00:14:24.730 05:24:56 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:24.730 05:24:56 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56210 00:14:24.730 05:24:56 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:24.730 killing process with pid 56210 00:14:24.730 05:24:56 rpc -- common/autotest_common.sh@962 -- 
# '[' reactor_0 = sudo ']' 00:14:24.730 05:24:56 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56210' 00:14:24.730 05:24:56 rpc -- common/autotest_common.sh@971 -- # kill 56210 00:14:24.730 05:24:56 rpc -- common/autotest_common.sh@976 -- # wait 56210 00:14:26.641 00:14:26.641 real 0m3.821s 00:14:26.641 user 0m4.274s 00:14:26.641 sys 0m0.696s 00:14:26.641 05:24:58 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:26.641 ************************************ 00:14:26.641 END TEST rpc 00:14:26.641 ************************************ 00:14:26.641 05:24:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.641 05:24:58 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:14:26.641 05:24:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:26.641 05:24:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:26.641 05:24:58 -- common/autotest_common.sh@10 -- # set +x 00:14:26.641 ************************************ 00:14:26.641 START TEST skip_rpc 00:14:26.641 ************************************ 00:14:26.641 05:24:58 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:14:26.641 * Looking for test storage... 
00:14:26.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:14:26.641 05:24:58 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:26.641 05:24:58 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:14:26.641 05:24:58 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:26.641 05:24:58 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@345 -- # : 1 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:26.641 05:24:58 skip_rpc -- scripts/common.sh@368 -- # return 0 00:14:26.641 05:24:58 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:26.641 05:24:58 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:26.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.641 --rc genhtml_branch_coverage=1 00:14:26.641 --rc genhtml_function_coverage=1 00:14:26.641 --rc genhtml_legend=1 00:14:26.641 --rc geninfo_all_blocks=1 00:14:26.641 --rc geninfo_unexecuted_blocks=1 00:14:26.641 00:14:26.641 ' 00:14:26.641 05:24:58 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:26.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.641 --rc genhtml_branch_coverage=1 00:14:26.641 --rc genhtml_function_coverage=1 00:14:26.641 --rc genhtml_legend=1 00:14:26.641 --rc geninfo_all_blocks=1 00:14:26.641 --rc geninfo_unexecuted_blocks=1 00:14:26.641 00:14:26.641 ' 00:14:26.641 05:24:58 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:14:26.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.641 --rc genhtml_branch_coverage=1 00:14:26.641 --rc genhtml_function_coverage=1 00:14:26.641 --rc genhtml_legend=1 00:14:26.641 --rc geninfo_all_blocks=1 00:14:26.641 --rc geninfo_unexecuted_blocks=1 00:14:26.641 00:14:26.641 ' 00:14:26.641 05:24:58 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:26.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.641 --rc genhtml_branch_coverage=1 00:14:26.641 --rc genhtml_function_coverage=1 00:14:26.641 --rc genhtml_legend=1 00:14:26.641 --rc geninfo_all_blocks=1 00:14:26.641 --rc geninfo_unexecuted_blocks=1 00:14:26.641 00:14:26.641 ' 00:14:26.641 05:24:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:26.641 05:24:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:14:26.641 05:24:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:14:26.641 05:24:58 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:26.641 05:24:58 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:26.641 05:24:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.641 ************************************ 00:14:26.641 START TEST skip_rpc 00:14:26.641 ************************************ 00:14:26.641 05:24:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:14:26.641 05:24:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56428 00:14:26.641 05:24:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:26.641 05:24:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:14:26.641 05:24:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:14:26.641 [2024-11-20 05:24:58.436841] Starting SPDK v25.01-pre 
git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:14:26.641 [2024-11-20 05:24:58.437178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56428 ] 00:14:26.902 [2024-11-20 05:24:58.598111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.902 [2024-11-20 05:24:58.718829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56428 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 56428 ']' 00:14:32.199 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 56428 00:14:32.200 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:14:32.200 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:32.200 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56428 00:14:32.200 killing process with pid 56428 00:14:32.200 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:32.200 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:32.200 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56428' 00:14:32.200 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 56428 00:14:32.200 05:25:03 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 56428 00:14:33.585 ************************************ 00:14:33.585 END TEST skip_rpc 00:14:33.585 ************************************ 00:14:33.585 00:14:33.585 real 0m6.657s 00:14:33.585 user 0m6.194s 00:14:33.585 sys 0m0.347s 00:14:33.585 05:25:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:33.585 05:25:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.585 05:25:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:14:33.585 05:25:05 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:33.585 05:25:05 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:33.585 05:25:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.585 
************************************ 00:14:33.585 START TEST skip_rpc_with_json 00:14:33.585 ************************************ 00:14:33.585 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:14:33.585 05:25:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:14:33.585 05:25:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56527 00:14:33.585 05:25:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:33.585 05:25:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56527 00:14:33.585 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 56527 ']' 00:14:33.585 05:25:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:33.585 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.585 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:33.585 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.585 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:33.585 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:14:33.585 [2024-11-20 05:25:05.114263] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:14:33.586 [2024-11-20 05:25:05.114387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56527 ] 00:14:33.586 [2024-11-20 05:25:05.265768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.586 [2024-11-20 05:25:05.352235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.156 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:34.156 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:14:34.156 05:25:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:14:34.156 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.156 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:14:34.156 [2024-11-20 05:25:05.947117] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:14:34.156 request: 00:14:34.156 { 00:14:34.156 "trtype": "tcp", 00:14:34.156 "method": "nvmf_get_transports", 00:14:34.156 "req_id": 1 00:14:34.156 } 00:14:34.156 Got JSON-RPC error response 00:14:34.156 response: 00:14:34.156 { 00:14:34.156 "code": -19, 00:14:34.156 "message": "No such device" 00:14:34.156 } 00:14:34.156 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:34.156 05:25:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:14:34.156 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.156 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:14:34.156 [2024-11-20 05:25:05.959220] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:14:34.156 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.156 05:25:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:14:34.156 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.156 05:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:14:34.445 05:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.445 05:25:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:34.445 { 00:14:34.445 "subsystems": [ 00:14:34.445 { 00:14:34.445 "subsystem": "fsdev", 00:14:34.445 "config": [ 00:14:34.445 { 00:14:34.445 "method": "fsdev_set_opts", 00:14:34.445 "params": { 00:14:34.445 "fsdev_io_pool_size": 65535, 00:14:34.445 "fsdev_io_cache_size": 256 00:14:34.445 } 00:14:34.445 } 00:14:34.445 ] 00:14:34.445 }, 00:14:34.445 { 00:14:34.445 "subsystem": "keyring", 00:14:34.445 "config": [] 00:14:34.445 }, 00:14:34.445 { 00:14:34.445 "subsystem": "iobuf", 00:14:34.445 "config": [ 00:14:34.445 { 00:14:34.445 "method": "iobuf_set_options", 00:14:34.445 "params": { 00:14:34.445 "small_pool_count": 8192, 00:14:34.445 "large_pool_count": 1024, 00:14:34.445 "small_bufsize": 8192, 00:14:34.445 "large_bufsize": 135168, 00:14:34.445 "enable_numa": false 00:14:34.445 } 00:14:34.446 } 00:14:34.446 ] 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "subsystem": "sock", 00:14:34.446 "config": [ 00:14:34.446 { 00:14:34.446 "method": "sock_set_default_impl", 00:14:34.446 "params": { 00:14:34.446 "impl_name": "posix" 00:14:34.446 } 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "method": "sock_impl_set_options", 00:14:34.446 "params": { 00:14:34.446 "impl_name": "ssl", 00:14:34.446 "recv_buf_size": 4096, 00:14:34.446 "send_buf_size": 4096, 00:14:34.446 "enable_recv_pipe": true, 00:14:34.446 "enable_quickack": false, 00:14:34.446 
"enable_placement_id": 0, 00:14:34.446 "enable_zerocopy_send_server": true, 00:14:34.446 "enable_zerocopy_send_client": false, 00:14:34.446 "zerocopy_threshold": 0, 00:14:34.446 "tls_version": 0, 00:14:34.446 "enable_ktls": false 00:14:34.446 } 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "method": "sock_impl_set_options", 00:14:34.446 "params": { 00:14:34.446 "impl_name": "posix", 00:14:34.446 "recv_buf_size": 2097152, 00:14:34.446 "send_buf_size": 2097152, 00:14:34.446 "enable_recv_pipe": true, 00:14:34.446 "enable_quickack": false, 00:14:34.446 "enable_placement_id": 0, 00:14:34.446 "enable_zerocopy_send_server": true, 00:14:34.446 "enable_zerocopy_send_client": false, 00:14:34.446 "zerocopy_threshold": 0, 00:14:34.446 "tls_version": 0, 00:14:34.446 "enable_ktls": false 00:14:34.446 } 00:14:34.446 } 00:14:34.446 ] 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "subsystem": "vmd", 00:14:34.446 "config": [] 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "subsystem": "accel", 00:14:34.446 "config": [ 00:14:34.446 { 00:14:34.446 "method": "accel_set_options", 00:14:34.446 "params": { 00:14:34.446 "small_cache_size": 128, 00:14:34.446 "large_cache_size": 16, 00:14:34.446 "task_count": 2048, 00:14:34.446 "sequence_count": 2048, 00:14:34.446 "buf_count": 2048 00:14:34.446 } 00:14:34.446 } 00:14:34.446 ] 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "subsystem": "bdev", 00:14:34.446 "config": [ 00:14:34.446 { 00:14:34.446 "method": "bdev_set_options", 00:14:34.446 "params": { 00:14:34.446 "bdev_io_pool_size": 65535, 00:14:34.446 "bdev_io_cache_size": 256, 00:14:34.446 "bdev_auto_examine": true, 00:14:34.446 "iobuf_small_cache_size": 128, 00:14:34.446 "iobuf_large_cache_size": 16 00:14:34.446 } 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "method": "bdev_raid_set_options", 00:14:34.446 "params": { 00:14:34.446 "process_window_size_kb": 1024, 00:14:34.446 "process_max_bandwidth_mb_sec": 0 00:14:34.446 } 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "method": "bdev_iscsi_set_options", 
00:14:34.446 "params": { 00:14:34.446 "timeout_sec": 30 00:14:34.446 } 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "method": "bdev_nvme_set_options", 00:14:34.446 "params": { 00:14:34.446 "action_on_timeout": "none", 00:14:34.446 "timeout_us": 0, 00:14:34.446 "timeout_admin_us": 0, 00:14:34.446 "keep_alive_timeout_ms": 10000, 00:14:34.446 "arbitration_burst": 0, 00:14:34.446 "low_priority_weight": 0, 00:14:34.446 "medium_priority_weight": 0, 00:14:34.446 "high_priority_weight": 0, 00:14:34.446 "nvme_adminq_poll_period_us": 10000, 00:14:34.446 "nvme_ioq_poll_period_us": 0, 00:14:34.446 "io_queue_requests": 0, 00:14:34.446 "delay_cmd_submit": true, 00:14:34.446 "transport_retry_count": 4, 00:14:34.446 "bdev_retry_count": 3, 00:14:34.446 "transport_ack_timeout": 0, 00:14:34.446 "ctrlr_loss_timeout_sec": 0, 00:14:34.446 "reconnect_delay_sec": 0, 00:14:34.446 "fast_io_fail_timeout_sec": 0, 00:14:34.446 "disable_auto_failback": false, 00:14:34.446 "generate_uuids": false, 00:14:34.446 "transport_tos": 0, 00:14:34.446 "nvme_error_stat": false, 00:14:34.446 "rdma_srq_size": 0, 00:14:34.446 "io_path_stat": false, 00:14:34.446 "allow_accel_sequence": false, 00:14:34.446 "rdma_max_cq_size": 0, 00:14:34.446 "rdma_cm_event_timeout_ms": 0, 00:14:34.446 "dhchap_digests": [ 00:14:34.446 "sha256", 00:14:34.446 "sha384", 00:14:34.446 "sha512" 00:14:34.446 ], 00:14:34.446 "dhchap_dhgroups": [ 00:14:34.446 "null", 00:14:34.446 "ffdhe2048", 00:14:34.446 "ffdhe3072", 00:14:34.446 "ffdhe4096", 00:14:34.446 "ffdhe6144", 00:14:34.446 "ffdhe8192" 00:14:34.446 ] 00:14:34.446 } 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "method": "bdev_nvme_set_hotplug", 00:14:34.446 "params": { 00:14:34.446 "period_us": 100000, 00:14:34.446 "enable": false 00:14:34.446 } 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "method": "bdev_wait_for_examine" 00:14:34.446 } 00:14:34.446 ] 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "subsystem": "scsi", 00:14:34.446 "config": null 00:14:34.446 }, 00:14:34.446 { 
00:14:34.446 "subsystem": "scheduler", 00:14:34.446 "config": [ 00:14:34.446 { 00:14:34.446 "method": "framework_set_scheduler", 00:14:34.446 "params": { 00:14:34.446 "name": "static" 00:14:34.446 } 00:14:34.446 } 00:14:34.446 ] 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "subsystem": "vhost_scsi", 00:14:34.446 "config": [] 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "subsystem": "vhost_blk", 00:14:34.446 "config": [] 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "subsystem": "ublk", 00:14:34.446 "config": [] 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "subsystem": "nbd", 00:14:34.446 "config": [] 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "subsystem": "nvmf", 00:14:34.446 "config": [ 00:14:34.446 { 00:14:34.446 "method": "nvmf_set_config", 00:14:34.446 "params": { 00:14:34.446 "discovery_filter": "match_any", 00:14:34.446 "admin_cmd_passthru": { 00:14:34.446 "identify_ctrlr": false 00:14:34.446 }, 00:14:34.446 "dhchap_digests": [ 00:14:34.446 "sha256", 00:14:34.446 "sha384", 00:14:34.446 "sha512" 00:14:34.446 ], 00:14:34.446 "dhchap_dhgroups": [ 00:14:34.446 "null", 00:14:34.446 "ffdhe2048", 00:14:34.446 "ffdhe3072", 00:14:34.446 "ffdhe4096", 00:14:34.446 "ffdhe6144", 00:14:34.446 "ffdhe8192" 00:14:34.446 ] 00:14:34.446 } 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "method": "nvmf_set_max_subsystems", 00:14:34.446 "params": { 00:14:34.446 "max_subsystems": 1024 00:14:34.446 } 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "method": "nvmf_set_crdt", 00:14:34.446 "params": { 00:14:34.446 "crdt1": 0, 00:14:34.446 "crdt2": 0, 00:14:34.446 "crdt3": 0 00:14:34.446 } 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "method": "nvmf_create_transport", 00:14:34.446 "params": { 00:14:34.446 "trtype": "TCP", 00:14:34.446 "max_queue_depth": 128, 00:14:34.446 "max_io_qpairs_per_ctrlr": 127, 00:14:34.446 "in_capsule_data_size": 4096, 00:14:34.446 "max_io_size": 131072, 00:14:34.446 "io_unit_size": 131072, 00:14:34.446 "max_aq_depth": 128, 00:14:34.446 "num_shared_buffers": 511, 
00:14:34.446 "buf_cache_size": 4294967295, 00:14:34.446 "dif_insert_or_strip": false, 00:14:34.446 "zcopy": false, 00:14:34.446 "c2h_success": true, 00:14:34.446 "sock_priority": 0, 00:14:34.446 "abort_timeout_sec": 1, 00:14:34.446 "ack_timeout": 0, 00:14:34.446 "data_wr_pool_size": 0 00:14:34.446 } 00:14:34.446 } 00:14:34.446 ] 00:14:34.446 }, 00:14:34.446 { 00:14:34.446 "subsystem": "iscsi", 00:14:34.446 "config": [ 00:14:34.446 { 00:14:34.446 "method": "iscsi_set_options", 00:14:34.446 "params": { 00:14:34.446 "node_base": "iqn.2016-06.io.spdk", 00:14:34.446 "max_sessions": 128, 00:14:34.446 "max_connections_per_session": 2, 00:14:34.446 "max_queue_depth": 64, 00:14:34.446 "default_time2wait": 2, 00:14:34.446 "default_time2retain": 20, 00:14:34.446 "first_burst_length": 8192, 00:14:34.446 "immediate_data": true, 00:14:34.446 "allow_duplicated_isid": false, 00:14:34.446 "error_recovery_level": 0, 00:14:34.446 "nop_timeout": 60, 00:14:34.446 "nop_in_interval": 30, 00:14:34.446 "disable_chap": false, 00:14:34.446 "require_chap": false, 00:14:34.446 "mutual_chap": false, 00:14:34.446 "chap_group": 0, 00:14:34.446 "max_large_datain_per_connection": 64, 00:14:34.446 "max_r2t_per_connection": 4, 00:14:34.446 "pdu_pool_size": 36864, 00:14:34.446 "immediate_data_pool_size": 16384, 00:14:34.446 "data_out_pool_size": 2048 00:14:34.446 } 00:14:34.446 } 00:14:34.446 ] 00:14:34.446 } 00:14:34.446 ] 00:14:34.446 } 00:14:34.446 05:25:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:34.446 05:25:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56527 00:14:34.446 05:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 56527 ']' 00:14:34.446 05:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 56527 00:14:34.447 05:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:14:34.447 05:25:06 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:34.447 05:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56527 00:14:34.447 killing process with pid 56527 00:14:34.447 05:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:34.447 05:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:34.447 05:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56527' 00:14:34.447 05:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 56527 00:14:34.447 05:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 56527 00:14:36.385 05:25:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56566 00:14:36.385 05:25:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:14:36.385 05:25:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:41.688 05:25:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56566 00:14:41.688 05:25:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 56566 ']' 00:14:41.688 05:25:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 56566 00:14:41.688 05:25:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:14:41.688 05:25:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:41.688 05:25:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56566 00:14:41.688 killing process with pid 56566 00:14:41.688 05:25:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:41.688 05:25:12 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:41.688 05:25:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56566' 00:14:41.688 05:25:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 56566 00:14:41.688 05:25:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 56566 00:14:42.629 05:25:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:14:42.629 05:25:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:14:42.629 ************************************ 00:14:42.629 END TEST skip_rpc_with_json 00:14:42.629 ************************************ 00:14:42.629 00:14:42.629 real 0m9.386s 00:14:42.629 user 0m8.929s 00:14:42.629 sys 0m0.646s 00:14:42.629 05:25:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:42.629 05:25:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:14:42.891 05:25:14 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:14:42.891 05:25:14 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:42.891 05:25:14 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:42.891 05:25:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.891 ************************************ 00:14:42.891 START TEST skip_rpc_with_delay 00:14:42.891 ************************************ 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:14:42.891 
05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:14:42.891 [2024-11-20 05:25:14.563353] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:42.891 00:14:42.891 real 0m0.133s 00:14:42.891 user 0m0.064s 00:14:42.891 sys 0m0.067s 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:42.891 ************************************ 00:14:42.891 END TEST skip_rpc_with_delay 00:14:42.891 ************************************ 00:14:42.891 05:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:14:42.891 05:25:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:14:42.891 05:25:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:14:42.891 05:25:14 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:14:42.891 05:25:14 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:42.891 05:25:14 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:42.891 05:25:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.891 ************************************ 00:14:42.891 START TEST exit_on_failed_rpc_init 00:14:42.891 ************************************ 00:14:42.891 05:25:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:14:42.891 05:25:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=56694 00:14:42.891 05:25:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:42.891 05:25:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 56694 00:14:42.891 05:25:14 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 56694 ']' 00:14:42.891 05:25:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.891 05:25:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:42.891 05:25:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.891 05:25:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:42.891 05:25:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:14:43.153 [2024-11-20 05:25:14.734387] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:14:43.153 [2024-11-20 05:25:14.734498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56694 ] 00:14:43.153 [2024-11-20 05:25:14.895663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.415 [2024-11-20 05:25:15.015984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.987 05:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:43.987 05:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:14:43.987 05:25:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:43.987 05:25:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:14:43.987 05:25:15 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:14:43.987 05:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:14:43.987 05:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:43.987 05:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.987 05:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:43.987 05:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.987 05:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:43.987 05:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.987 05:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:43.987 05:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:14:43.987 05:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:14:43.987 [2024-11-20 05:25:15.755897] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:14:43.987 [2024-11-20 05:25:15.756026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56712 ] 00:14:44.248 [2024-11-20 05:25:15.912476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.248 [2024-11-20 05:25:16.031074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.248 [2024-11-20 05:25:16.031179] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:14:44.248 [2024-11-20 05:25:16.031194] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:14:44.248 [2024-11-20 05:25:16.031205] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 56694 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 56694 ']' 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 56694 00:14:44.509 05:25:16 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56694 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56694' 00:14:44.509 killing process with pid 56694 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 56694 00:14:44.509 05:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 56694 00:14:46.423 ************************************ 00:14:46.423 00:14:46.423 real 0m3.186s 00:14:46.423 user 0m3.466s 00:14:46.423 sys 0m0.474s 00:14:46.423 05:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:46.423 05:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:14:46.423 END TEST exit_on_failed_rpc_init 00:14:46.423 ************************************ 00:14:46.423 05:25:17 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:46.423 ************************************ 00:14:46.423 END TEST skip_rpc 00:14:46.423 ************************************ 00:14:46.423 00:14:46.423 real 0m19.695s 00:14:46.423 user 0m18.795s 00:14:46.423 sys 0m1.714s 00:14:46.423 05:25:17 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:46.423 05:25:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.423 05:25:17 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:14:46.423 05:25:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:46.423 05:25:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:46.423 05:25:17 -- common/autotest_common.sh@10 -- # set +x 00:14:46.423 ************************************ 00:14:46.423 START TEST rpc_client 00:14:46.423 ************************************ 00:14:46.423 05:25:17 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:14:46.423 * Looking for test storage... 00:14:46.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:14:46.423 05:25:17 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:46.423 05:25:17 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:46.423 05:25:17 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:14:46.423 05:25:18 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@345 
-- # : 1 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@353 -- # local d=1 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@355 -- # echo 1 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@353 -- # local d=2 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@355 -- # echo 2 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:46.423 05:25:18 rpc_client -- scripts/common.sh@368 -- # return 0 00:14:46.423 05:25:18 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:46.423 05:25:18 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:46.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.423 --rc genhtml_branch_coverage=1 00:14:46.423 --rc genhtml_function_coverage=1 00:14:46.423 --rc genhtml_legend=1 00:14:46.423 --rc geninfo_all_blocks=1 00:14:46.424 --rc geninfo_unexecuted_blocks=1 00:14:46.424 00:14:46.424 ' 00:14:46.424 05:25:18 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:46.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.424 --rc genhtml_branch_coverage=1 00:14:46.424 --rc genhtml_function_coverage=1 00:14:46.424 --rc 
genhtml_legend=1 00:14:46.424 --rc geninfo_all_blocks=1 00:14:46.424 --rc geninfo_unexecuted_blocks=1 00:14:46.424 00:14:46.424 ' 00:14:46.424 05:25:18 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:46.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.424 --rc genhtml_branch_coverage=1 00:14:46.424 --rc genhtml_function_coverage=1 00:14:46.424 --rc genhtml_legend=1 00:14:46.424 --rc geninfo_all_blocks=1 00:14:46.424 --rc geninfo_unexecuted_blocks=1 00:14:46.424 00:14:46.424 ' 00:14:46.424 05:25:18 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:46.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.424 --rc genhtml_branch_coverage=1 00:14:46.424 --rc genhtml_function_coverage=1 00:14:46.424 --rc genhtml_legend=1 00:14:46.424 --rc geninfo_all_blocks=1 00:14:46.424 --rc geninfo_unexecuted_blocks=1 00:14:46.424 00:14:46.424 ' 00:14:46.424 05:25:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:14:46.424 OK 00:14:46.424 05:25:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:14:46.424 00:14:46.424 real 0m0.197s 00:14:46.424 user 0m0.113s 00:14:46.424 sys 0m0.089s 00:14:46.424 05:25:18 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:46.424 ************************************ 00:14:46.424 05:25:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:14:46.424 END TEST rpc_client 00:14:46.424 ************************************ 00:14:46.424 05:25:18 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:14:46.424 05:25:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:46.424 05:25:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:46.424 05:25:18 -- common/autotest_common.sh@10 -- # set +x 00:14:46.424 ************************************ 00:14:46.424 START TEST json_config 
00:14:46.424 ************************************ 00:14:46.424 05:25:18 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:14:46.424 05:25:18 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:46.424 05:25:18 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:46.424 05:25:18 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:14:46.685 05:25:18 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:46.685 05:25:18 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:46.685 05:25:18 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:46.685 05:25:18 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:46.685 05:25:18 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:14:46.685 05:25:18 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:14:46.685 05:25:18 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:14:46.685 05:25:18 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:14:46.685 05:25:18 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:14:46.685 05:25:18 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:14:46.685 05:25:18 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:14:46.685 05:25:18 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:46.685 05:25:18 json_config -- scripts/common.sh@344 -- # case "$op" in 00:14:46.685 05:25:18 json_config -- scripts/common.sh@345 -- # : 1 00:14:46.685 05:25:18 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:46.685 05:25:18 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:46.685 05:25:18 json_config -- scripts/common.sh@365 -- # decimal 1 00:14:46.685 05:25:18 json_config -- scripts/common.sh@353 -- # local d=1 00:14:46.685 05:25:18 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:46.685 05:25:18 json_config -- scripts/common.sh@355 -- # echo 1 00:14:46.685 05:25:18 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:14:46.685 05:25:18 json_config -- scripts/common.sh@366 -- # decimal 2 00:14:46.685 05:25:18 json_config -- scripts/common.sh@353 -- # local d=2 00:14:46.685 05:25:18 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:46.685 05:25:18 json_config -- scripts/common.sh@355 -- # echo 2 00:14:46.685 05:25:18 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:14:46.685 05:25:18 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:46.685 05:25:18 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:46.685 05:25:18 json_config -- scripts/common.sh@368 -- # return 0 00:14:46.685 05:25:18 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:46.685 05:25:18 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:46.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.686 --rc genhtml_branch_coverage=1 00:14:46.686 --rc genhtml_function_coverage=1 00:14:46.686 --rc genhtml_legend=1 00:14:46.686 --rc geninfo_all_blocks=1 00:14:46.686 --rc geninfo_unexecuted_blocks=1 00:14:46.686 00:14:46.686 ' 00:14:46.686 05:25:18 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:46.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.686 --rc genhtml_branch_coverage=1 00:14:46.686 --rc genhtml_function_coverage=1 00:14:46.686 --rc genhtml_legend=1 00:14:46.686 --rc geninfo_all_blocks=1 00:14:46.686 --rc geninfo_unexecuted_blocks=1 00:14:46.686 00:14:46.686 ' 00:14:46.686 05:25:18 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:46.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.686 --rc genhtml_branch_coverage=1 00:14:46.686 --rc genhtml_function_coverage=1 00:14:46.686 --rc genhtml_legend=1 00:14:46.686 --rc geninfo_all_blocks=1 00:14:46.686 --rc geninfo_unexecuted_blocks=1 00:14:46.686 00:14:46.686 ' 00:14:46.686 05:25:18 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:46.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.686 --rc genhtml_branch_coverage=1 00:14:46.686 --rc genhtml_function_coverage=1 00:14:46.686 --rc genhtml_legend=1 00:14:46.686 --rc geninfo_all_blocks=1 00:14:46.686 --rc geninfo_unexecuted_blocks=1 00:14:46.686 00:14:46.686 ' 00:14:46.686 05:25:18 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe49e63-e03b-4663-9d3a-018d85cb6e68 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=bfe49e63-e03b-4663-9d3a-018d85cb6e68 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:46.686 05:25:18 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:14:46.686 05:25:18 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.686 05:25:18 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.686 05:25:18 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.686 05:25:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.686 05:25:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.686 05:25:18 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.686 05:25:18 json_config -- paths/export.sh@5 -- # export PATH 00:14:46.686 05:25:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@51 -- # : 0 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:46.686 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:46.686 05:25:18 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:46.686 05:25:18 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:14:46.686 05:25:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:14:46.686 05:25:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:14:46.686 05:25:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:14:46.686 WARNING: No tests are enabled so not running JSON configuration tests 00:14:46.686 05:25:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:14:46.686 05:25:18 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:14:46.686 05:25:18 json_config -- json_config/json_config.sh@28 -- # exit 0 00:14:46.686 00:14:46.686 real 0m0.156s 00:14:46.686 user 0m0.096s 00:14:46.686 sys 0m0.063s 00:14:46.686 ************************************ 00:14:46.686 END TEST json_config 00:14:46.686 ************************************ 00:14:46.686 05:25:18 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:46.686 05:25:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:46.686 05:25:18 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:14:46.686 05:25:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:46.686 05:25:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:46.686 05:25:18 -- common/autotest_common.sh@10 -- # set +x 00:14:46.686 ************************************ 00:14:46.687 START TEST json_config_extra_key 00:14:46.687 ************************************ 00:14:46.687 05:25:18 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:14:46.687 05:25:18 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:46.687 05:25:18 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:14:46.687 05:25:18 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:46.687 05:25:18 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:14:46.687 05:25:18 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:46.687 05:25:18 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:46.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.687 --rc genhtml_branch_coverage=1 00:14:46.687 --rc genhtml_function_coverage=1 00:14:46.687 --rc genhtml_legend=1 00:14:46.687 --rc geninfo_all_blocks=1 00:14:46.687 --rc geninfo_unexecuted_blocks=1 00:14:46.687 00:14:46.687 ' 00:14:46.687 05:25:18 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:46.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.687 --rc genhtml_branch_coverage=1 00:14:46.687 --rc genhtml_function_coverage=1 00:14:46.687 --rc 
genhtml_legend=1 00:14:46.687 --rc geninfo_all_blocks=1 00:14:46.687 --rc geninfo_unexecuted_blocks=1 00:14:46.687 00:14:46.687 ' 00:14:46.687 05:25:18 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:46.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.687 --rc genhtml_branch_coverage=1 00:14:46.687 --rc genhtml_function_coverage=1 00:14:46.687 --rc genhtml_legend=1 00:14:46.687 --rc geninfo_all_blocks=1 00:14:46.687 --rc geninfo_unexecuted_blocks=1 00:14:46.687 00:14:46.687 ' 00:14:46.687 05:25:18 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:46.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.687 --rc genhtml_branch_coverage=1 00:14:46.687 --rc genhtml_function_coverage=1 00:14:46.687 --rc genhtml_legend=1 00:14:46.687 --rc geninfo_all_blocks=1 00:14:46.687 --rc geninfo_unexecuted_blocks=1 00:14:46.687 00:14:46.687 ' 00:14:46.687 05:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe49e63-e03b-4663-9d3a-018d85cb6e68 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe49e63-e03b-4663-9d3a-018d85cb6e68 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.687 05:25:18 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:46.687 05:25:18 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:14:46.948 05:25:18 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.948 05:25:18 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.948 05:25:18 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.948 05:25:18 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.948 05:25:18 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.948 05:25:18 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.948 05:25:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:14:46.948 05:25:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.948 05:25:18 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:14:46.948 05:25:18 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:46.948 05:25:18 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:46.948 05:25:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.948 05:25:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.948 05:25:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:14:46.948 05:25:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:46.948 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:46.948 05:25:18 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:46.948 05:25:18 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:46.948 05:25:18 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:46.948 05:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:14:46.948 05:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:14:46.948 05:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:14:46.948 05:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:14:46.948 05:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:14:46.948 05:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:14:46.948 05:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:14:46.948 05:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:14:46.948 05:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:14:46.948 05:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:14:46.948 05:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:14:46.948 INFO: launching applications... 
00:14:46.948 05:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:14:46.948 05:25:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:14:46.948 05:25:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:14:46.948 05:25:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:14:46.948 05:25:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:14:46.948 05:25:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:14:46.948 05:25:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:46.948 05:25:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:46.948 Waiting for target to run... 00:14:46.948 05:25:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=56906 00:14:46.948 05:25:18 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:14:46.949 05:25:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:14:46.949 05:25:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 56906 /var/tmp/spdk_tgt.sock 00:14:46.949 05:25:18 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 56906 ']' 00:14:46.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:14:46.949 05:25:18 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:14:46.949 05:25:18 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:46.949 05:25:18 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:14:46.949 05:25:18 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:46.949 05:25:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:14:46.949 [2024-11-20 05:25:18.604017] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:14:46.949 [2024-11-20 05:25:18.604128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56906 ] 00:14:47.211 [2024-11-20 05:25:18.937786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.472 [2024-11-20 05:25:19.050203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.042 00:14:48.042 INFO: shutting down applications... 00:14:48.042 05:25:19 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:48.042 05:25:19 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:14:48.042 05:25:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:14:48.042 05:25:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:14:48.042 05:25:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:14:48.042 05:25:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:14:48.042 05:25:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:14:48.042 05:25:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 56906 ]] 00:14:48.042 05:25:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 56906 00:14:48.042 05:25:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:14:48.042 05:25:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:48.042 05:25:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56906 00:14:48.042 05:25:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:14:48.304 05:25:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:14:48.304 05:25:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:48.304 05:25:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56906 00:14:48.304 05:25:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:14:48.875 05:25:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:14:48.875 05:25:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:48.875 05:25:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56906 00:14:48.875 05:25:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:14:49.447 05:25:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:14:49.447 05:25:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:49.447 05:25:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56906 00:14:49.447 05:25:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:14:50.019 05:25:21 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:14:50.019 05:25:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:50.019 05:25:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56906 00:14:50.019 05:25:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:14:50.019 SPDK target shutdown done 00:14:50.019 05:25:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:14:50.019 05:25:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:14:50.019 05:25:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:14:50.019 Success 00:14:50.019 05:25:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:14:50.019 00:14:50.019 real 0m3.219s 00:14:50.019 user 0m2.963s 00:14:50.019 sys 0m0.431s 00:14:50.019 05:25:21 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:50.019 05:25:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:14:50.019 ************************************ 00:14:50.019 END TEST json_config_extra_key 00:14:50.019 ************************************ 00:14:50.019 05:25:21 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:14:50.019 05:25:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:50.019 05:25:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:50.019 05:25:21 -- common/autotest_common.sh@10 -- # set +x 00:14:50.019 ************************************ 00:14:50.019 START TEST alias_rpc 00:14:50.019 ************************************ 00:14:50.019 05:25:21 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:14:50.019 * Looking for test storage... 
00:14:50.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:14:50.019 05:25:21 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:50.019 05:25:21 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:50.019 05:25:21 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:14:50.019 05:25:21 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@345 -- # : 1 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.019 05:25:21 alias_rpc -- scripts/common.sh@368 -- # return 0 00:14:50.019 05:25:21 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.019 05:25:21 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:50.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.019 --rc genhtml_branch_coverage=1 00:14:50.019 --rc genhtml_function_coverage=1 00:14:50.019 --rc genhtml_legend=1 00:14:50.019 --rc geninfo_all_blocks=1 00:14:50.019 --rc geninfo_unexecuted_blocks=1 00:14:50.019 00:14:50.019 ' 00:14:50.019 05:25:21 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:50.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.019 --rc genhtml_branch_coverage=1 00:14:50.019 --rc genhtml_function_coverage=1 00:14:50.019 --rc genhtml_legend=1 00:14:50.019 --rc geninfo_all_blocks=1 00:14:50.019 --rc geninfo_unexecuted_blocks=1 00:14:50.019 00:14:50.019 ' 00:14:50.019 05:25:21 alias_rpc -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:14:50.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.019 --rc genhtml_branch_coverage=1 00:14:50.019 --rc genhtml_function_coverage=1 00:14:50.019 --rc genhtml_legend=1 00:14:50.019 --rc geninfo_all_blocks=1 00:14:50.019 --rc geninfo_unexecuted_blocks=1 00:14:50.019 00:14:50.019 ' 00:14:50.019 05:25:21 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:50.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.019 --rc genhtml_branch_coverage=1 00:14:50.019 --rc genhtml_function_coverage=1 00:14:50.019 --rc genhtml_legend=1 00:14:50.019 --rc geninfo_all_blocks=1 00:14:50.019 --rc geninfo_unexecuted_blocks=1 00:14:50.019 00:14:50.019 ' 00:14:50.019 05:25:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:14:50.019 05:25:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57004 00:14:50.019 05:25:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57004 00:14:50.019 05:25:21 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57004 ']' 00:14:50.019 05:25:21 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.019 05:25:21 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:50.019 05:25:21 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.019 05:25:21 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:50.019 05:25:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.019 05:25:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:50.280 [2024-11-20 05:25:21.887745] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:14:50.280 [2024-11-20 05:25:21.887905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57004 ] 00:14:50.280 [2024-11-20 05:25:22.050043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.558 [2024-11-20 05:25:22.170903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.130 05:25:22 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:51.130 05:25:22 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:14:51.130 05:25:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:14:51.393 05:25:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57004 00:14:51.393 05:25:23 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57004 ']' 00:14:51.393 05:25:23 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57004 00:14:51.393 05:25:23 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:14:51.393 05:25:23 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:51.393 05:25:23 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57004 00:14:51.393 05:25:23 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:51.393 killing process with pid 57004 00:14:51.393 05:25:23 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:51.393 05:25:23 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57004' 00:14:51.393 05:25:23 alias_rpc -- common/autotest_common.sh@971 -- # kill 57004 00:14:51.393 05:25:23 alias_rpc -- common/autotest_common.sh@976 -- # wait 57004 00:14:53.338 00:14:53.338 real 0m3.075s 00:14:53.338 user 0m3.148s 00:14:53.338 sys 0m0.499s 00:14:53.338 05:25:24 alias_rpc -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:14:53.338 05:25:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:53.338 ************************************ 00:14:53.338 END TEST alias_rpc 00:14:53.338 ************************************ 00:14:53.338 05:25:24 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:14:53.338 05:25:24 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:14:53.338 05:25:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:53.338 05:25:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:53.338 05:25:24 -- common/autotest_common.sh@10 -- # set +x 00:14:53.338 ************************************ 00:14:53.338 START TEST spdkcli_tcp 00:14:53.338 ************************************ 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:14:53.338 * Looking for test storage... 00:14:53.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:14:53.338 
05:25:24 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:53.338 05:25:24 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:53.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.338 --rc genhtml_branch_coverage=1 00:14:53.338 --rc genhtml_function_coverage=1 00:14:53.338 --rc genhtml_legend=1 
00:14:53.338 --rc geninfo_all_blocks=1 00:14:53.338 --rc geninfo_unexecuted_blocks=1 00:14:53.338 00:14:53.338 ' 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:53.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.338 --rc genhtml_branch_coverage=1 00:14:53.338 --rc genhtml_function_coverage=1 00:14:53.338 --rc genhtml_legend=1 00:14:53.338 --rc geninfo_all_blocks=1 00:14:53.338 --rc geninfo_unexecuted_blocks=1 00:14:53.338 00:14:53.338 ' 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:53.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.338 --rc genhtml_branch_coverage=1 00:14:53.338 --rc genhtml_function_coverage=1 00:14:53.338 --rc genhtml_legend=1 00:14:53.338 --rc geninfo_all_blocks=1 00:14:53.338 --rc geninfo_unexecuted_blocks=1 00:14:53.338 00:14:53.338 ' 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:53.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.338 --rc genhtml_branch_coverage=1 00:14:53.338 --rc genhtml_function_coverage=1 00:14:53.338 --rc genhtml_legend=1 00:14:53.338 --rc geninfo_all_blocks=1 00:14:53.338 --rc geninfo_unexecuted_blocks=1 00:14:53.338 00:14:53.338 ' 00:14:53.338 05:25:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:14:53.338 05:25:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:14:53.338 05:25:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:14:53.338 05:25:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:14:53.338 05:25:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:14:53.338 05:25:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:53.338 05:25:24 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:53.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.338 05:25:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57100 00:14:53.338 05:25:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57100 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57100 ']' 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:53.338 05:25:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:53.338 05:25:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:14:53.338 [2024-11-20 05:25:25.019801] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:14:53.339 [2024-11-20 05:25:25.019972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57100 ] 00:14:53.601 [2024-11-20 05:25:25.181215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:53.601 [2024-11-20 05:25:25.302891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.601 [2024-11-20 05:25:25.302988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.226 05:25:25 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:54.226 05:25:25 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:14:54.226 05:25:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57117 00:14:54.226 05:25:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:14:54.226 05:25:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:14:54.488 [ 00:14:54.488 "bdev_malloc_delete", 00:14:54.488 "bdev_malloc_create", 00:14:54.488 "bdev_null_resize", 00:14:54.488 "bdev_null_delete", 00:14:54.488 "bdev_null_create", 00:14:54.488 "bdev_nvme_cuse_unregister", 00:14:54.488 "bdev_nvme_cuse_register", 00:14:54.488 "bdev_opal_new_user", 00:14:54.488 "bdev_opal_set_lock_state", 00:14:54.488 "bdev_opal_delete", 00:14:54.488 "bdev_opal_get_info", 00:14:54.488 "bdev_opal_create", 00:14:54.488 "bdev_nvme_opal_revert", 00:14:54.488 "bdev_nvme_opal_init", 00:14:54.488 "bdev_nvme_send_cmd", 00:14:54.488 "bdev_nvme_set_keys", 00:14:54.488 "bdev_nvme_get_path_iostat", 00:14:54.488 "bdev_nvme_get_mdns_discovery_info", 00:14:54.488 "bdev_nvme_stop_mdns_discovery", 00:14:54.488 "bdev_nvme_start_mdns_discovery", 00:14:54.488 "bdev_nvme_set_multipath_policy", 00:14:54.488 
"bdev_nvme_set_preferred_path", 00:14:54.488 "bdev_nvme_get_io_paths", 00:14:54.488 "bdev_nvme_remove_error_injection", 00:14:54.488 "bdev_nvme_add_error_injection", 00:14:54.488 "bdev_nvme_get_discovery_info", 00:14:54.488 "bdev_nvme_stop_discovery", 00:14:54.488 "bdev_nvme_start_discovery", 00:14:54.488 "bdev_nvme_get_controller_health_info", 00:14:54.489 "bdev_nvme_disable_controller", 00:14:54.489 "bdev_nvme_enable_controller", 00:14:54.489 "bdev_nvme_reset_controller", 00:14:54.489 "bdev_nvme_get_transport_statistics", 00:14:54.489 "bdev_nvme_apply_firmware", 00:14:54.489 "bdev_nvme_detach_controller", 00:14:54.489 "bdev_nvme_get_controllers", 00:14:54.489 "bdev_nvme_attach_controller", 00:14:54.489 "bdev_nvme_set_hotplug", 00:14:54.489 "bdev_nvme_set_options", 00:14:54.489 "bdev_passthru_delete", 00:14:54.489 "bdev_passthru_create", 00:14:54.489 "bdev_lvol_set_parent_bdev", 00:14:54.489 "bdev_lvol_set_parent", 00:14:54.489 "bdev_lvol_check_shallow_copy", 00:14:54.489 "bdev_lvol_start_shallow_copy", 00:14:54.489 "bdev_lvol_grow_lvstore", 00:14:54.489 "bdev_lvol_get_lvols", 00:14:54.489 "bdev_lvol_get_lvstores", 00:14:54.489 "bdev_lvol_delete", 00:14:54.489 "bdev_lvol_set_read_only", 00:14:54.489 "bdev_lvol_resize", 00:14:54.489 "bdev_lvol_decouple_parent", 00:14:54.489 "bdev_lvol_inflate", 00:14:54.489 "bdev_lvol_rename", 00:14:54.489 "bdev_lvol_clone_bdev", 00:14:54.489 "bdev_lvol_clone", 00:14:54.489 "bdev_lvol_snapshot", 00:14:54.489 "bdev_lvol_create", 00:14:54.489 "bdev_lvol_delete_lvstore", 00:14:54.489 "bdev_lvol_rename_lvstore", 00:14:54.489 "bdev_lvol_create_lvstore", 00:14:54.489 "bdev_raid_set_options", 00:14:54.489 "bdev_raid_remove_base_bdev", 00:14:54.489 "bdev_raid_add_base_bdev", 00:14:54.489 "bdev_raid_delete", 00:14:54.489 "bdev_raid_create", 00:14:54.489 "bdev_raid_get_bdevs", 00:14:54.489 "bdev_error_inject_error", 00:14:54.489 "bdev_error_delete", 00:14:54.489 "bdev_error_create", 00:14:54.489 "bdev_split_delete", 00:14:54.489 
"bdev_split_create", 00:14:54.489 "bdev_delay_delete", 00:14:54.489 "bdev_delay_create", 00:14:54.489 "bdev_delay_update_latency", 00:14:54.489 "bdev_zone_block_delete", 00:14:54.489 "bdev_zone_block_create", 00:14:54.489 "blobfs_create", 00:14:54.489 "blobfs_detect", 00:14:54.489 "blobfs_set_cache_size", 00:14:54.489 "bdev_aio_delete", 00:14:54.489 "bdev_aio_rescan", 00:14:54.489 "bdev_aio_create", 00:14:54.489 "bdev_ftl_set_property", 00:14:54.489 "bdev_ftl_get_properties", 00:14:54.489 "bdev_ftl_get_stats", 00:14:54.489 "bdev_ftl_unmap", 00:14:54.489 "bdev_ftl_unload", 00:14:54.489 "bdev_ftl_delete", 00:14:54.489 "bdev_ftl_load", 00:14:54.489 "bdev_ftl_create", 00:14:54.489 "bdev_virtio_attach_controller", 00:14:54.489 "bdev_virtio_scsi_get_devices", 00:14:54.489 "bdev_virtio_detach_controller", 00:14:54.489 "bdev_virtio_blk_set_hotplug", 00:14:54.489 "bdev_iscsi_delete", 00:14:54.489 "bdev_iscsi_create", 00:14:54.489 "bdev_iscsi_set_options", 00:14:54.489 "accel_error_inject_error", 00:14:54.489 "ioat_scan_accel_module", 00:14:54.489 "dsa_scan_accel_module", 00:14:54.489 "iaa_scan_accel_module", 00:14:54.489 "keyring_file_remove_key", 00:14:54.489 "keyring_file_add_key", 00:14:54.489 "keyring_linux_set_options", 00:14:54.489 "fsdev_aio_delete", 00:14:54.489 "fsdev_aio_create", 00:14:54.489 "iscsi_get_histogram", 00:14:54.489 "iscsi_enable_histogram", 00:14:54.489 "iscsi_set_options", 00:14:54.489 "iscsi_get_auth_groups", 00:14:54.489 "iscsi_auth_group_remove_secret", 00:14:54.489 "iscsi_auth_group_add_secret", 00:14:54.489 "iscsi_delete_auth_group", 00:14:54.489 "iscsi_create_auth_group", 00:14:54.489 "iscsi_set_discovery_auth", 00:14:54.489 "iscsi_get_options", 00:14:54.489 "iscsi_target_node_request_logout", 00:14:54.489 "iscsi_target_node_set_redirect", 00:14:54.489 "iscsi_target_node_set_auth", 00:14:54.489 "iscsi_target_node_add_lun", 00:14:54.489 "iscsi_get_stats", 00:14:54.489 "iscsi_get_connections", 00:14:54.489 "iscsi_portal_group_set_auth", 
00:14:54.489 "iscsi_start_portal_group", 00:14:54.489 "iscsi_delete_portal_group", 00:14:54.489 "iscsi_create_portal_group", 00:14:54.489 "iscsi_get_portal_groups", 00:14:54.489 "iscsi_delete_target_node", 00:14:54.489 "iscsi_target_node_remove_pg_ig_maps", 00:14:54.489 "iscsi_target_node_add_pg_ig_maps", 00:14:54.489 "iscsi_create_target_node", 00:14:54.489 "iscsi_get_target_nodes", 00:14:54.489 "iscsi_delete_initiator_group", 00:14:54.489 "iscsi_initiator_group_remove_initiators", 00:14:54.489 "iscsi_initiator_group_add_initiators", 00:14:54.489 "iscsi_create_initiator_group", 00:14:54.489 "iscsi_get_initiator_groups", 00:14:54.489 "nvmf_set_crdt", 00:14:54.489 "nvmf_set_config", 00:14:54.489 "nvmf_set_max_subsystems", 00:14:54.489 "nvmf_stop_mdns_prr", 00:14:54.489 "nvmf_publish_mdns_prr", 00:14:54.489 "nvmf_subsystem_get_listeners", 00:14:54.489 "nvmf_subsystem_get_qpairs", 00:14:54.489 "nvmf_subsystem_get_controllers", 00:14:54.489 "nvmf_get_stats", 00:14:54.489 "nvmf_get_transports", 00:14:54.489 "nvmf_create_transport", 00:14:54.489 "nvmf_get_targets", 00:14:54.489 "nvmf_delete_target", 00:14:54.489 "nvmf_create_target", 00:14:54.489 "nvmf_subsystem_allow_any_host", 00:14:54.489 "nvmf_subsystem_set_keys", 00:14:54.489 "nvmf_subsystem_remove_host", 00:14:54.489 "nvmf_subsystem_add_host", 00:14:54.489 "nvmf_ns_remove_host", 00:14:54.489 "nvmf_ns_add_host", 00:14:54.489 "nvmf_subsystem_remove_ns", 00:14:54.489 "nvmf_subsystem_set_ns_ana_group", 00:14:54.489 "nvmf_subsystem_add_ns", 00:14:54.489 "nvmf_subsystem_listener_set_ana_state", 00:14:54.489 "nvmf_discovery_get_referrals", 00:14:54.489 "nvmf_discovery_remove_referral", 00:14:54.489 "nvmf_discovery_add_referral", 00:14:54.489 "nvmf_subsystem_remove_listener", 00:14:54.489 "nvmf_subsystem_add_listener", 00:14:54.489 "nvmf_delete_subsystem", 00:14:54.489 "nvmf_create_subsystem", 00:14:54.489 "nvmf_get_subsystems", 00:14:54.489 "env_dpdk_get_mem_stats", 00:14:54.489 "nbd_get_disks", 00:14:54.489 
"nbd_stop_disk", 00:14:54.489 "nbd_start_disk", 00:14:54.489 "ublk_recover_disk", 00:14:54.489 "ublk_get_disks", 00:14:54.489 "ublk_stop_disk", 00:14:54.489 "ublk_start_disk", 00:14:54.489 "ublk_destroy_target", 00:14:54.489 "ublk_create_target", 00:14:54.489 "virtio_blk_create_transport", 00:14:54.489 "virtio_blk_get_transports", 00:14:54.489 "vhost_controller_set_coalescing", 00:14:54.489 "vhost_get_controllers", 00:14:54.489 "vhost_delete_controller", 00:14:54.489 "vhost_create_blk_controller", 00:14:54.489 "vhost_scsi_controller_remove_target", 00:14:54.489 "vhost_scsi_controller_add_target", 00:14:54.489 "vhost_start_scsi_controller", 00:14:54.489 "vhost_create_scsi_controller", 00:14:54.489 "thread_set_cpumask", 00:14:54.489 "scheduler_set_options", 00:14:54.489 "framework_get_governor", 00:14:54.489 "framework_get_scheduler", 00:14:54.489 "framework_set_scheduler", 00:14:54.489 "framework_get_reactors", 00:14:54.489 "thread_get_io_channels", 00:14:54.489 "thread_get_pollers", 00:14:54.489 "thread_get_stats", 00:14:54.489 "framework_monitor_context_switch", 00:14:54.489 "spdk_kill_instance", 00:14:54.489 "log_enable_timestamps", 00:14:54.489 "log_get_flags", 00:14:54.489 "log_clear_flag", 00:14:54.489 "log_set_flag", 00:14:54.489 "log_get_level", 00:14:54.489 "log_set_level", 00:14:54.489 "log_get_print_level", 00:14:54.489 "log_set_print_level", 00:14:54.489 "framework_enable_cpumask_locks", 00:14:54.489 "framework_disable_cpumask_locks", 00:14:54.489 "framework_wait_init", 00:14:54.489 "framework_start_init", 00:14:54.489 "scsi_get_devices", 00:14:54.489 "bdev_get_histogram", 00:14:54.489 "bdev_enable_histogram", 00:14:54.489 "bdev_set_qos_limit", 00:14:54.489 "bdev_set_qd_sampling_period", 00:14:54.489 "bdev_get_bdevs", 00:14:54.489 "bdev_reset_iostat", 00:14:54.489 "bdev_get_iostat", 00:14:54.489 "bdev_examine", 00:14:54.489 "bdev_wait_for_examine", 00:14:54.489 "bdev_set_options", 00:14:54.489 "accel_get_stats", 00:14:54.489 "accel_set_options", 
00:14:54.489 "accel_set_driver", 00:14:54.489 "accel_crypto_key_destroy", 00:14:54.489 "accel_crypto_keys_get", 00:14:54.489 "accel_crypto_key_create", 00:14:54.489 "accel_assign_opc", 00:14:54.489 "accel_get_module_info", 00:14:54.489 "accel_get_opc_assignments", 00:14:54.489 "vmd_rescan", 00:14:54.489 "vmd_remove_device", 00:14:54.489 "vmd_enable", 00:14:54.489 "sock_get_default_impl", 00:14:54.489 "sock_set_default_impl", 00:14:54.489 "sock_impl_set_options", 00:14:54.489 "sock_impl_get_options", 00:14:54.489 "iobuf_get_stats", 00:14:54.489 "iobuf_set_options", 00:14:54.489 "keyring_get_keys", 00:14:54.489 "framework_get_pci_devices", 00:14:54.489 "framework_get_config", 00:14:54.489 "framework_get_subsystems", 00:14:54.489 "fsdev_set_opts", 00:14:54.489 "fsdev_get_opts", 00:14:54.489 "trace_get_info", 00:14:54.489 "trace_get_tpoint_group_mask", 00:14:54.489 "trace_disable_tpoint_group", 00:14:54.489 "trace_enable_tpoint_group", 00:14:54.489 "trace_clear_tpoint_mask", 00:14:54.489 "trace_set_tpoint_mask", 00:14:54.489 "notify_get_notifications", 00:14:54.489 "notify_get_types", 00:14:54.489 "spdk_get_version", 00:14:54.489 "rpc_get_methods" 00:14:54.489 ] 00:14:54.489 05:25:26 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:14:54.489 05:25:26 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:54.489 05:25:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:54.751 05:25:26 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:54.751 05:25:26 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57100 00:14:54.751 05:25:26 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57100 ']' 00:14:54.751 05:25:26 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57100 00:14:54.751 05:25:26 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:14:54.751 05:25:26 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:54.751 05:25:26 spdkcli_tcp -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57100 00:14:54.751 05:25:26 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:54.751 killing process with pid 57100 00:14:54.751 05:25:26 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:54.751 05:25:26 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57100' 00:14:54.751 05:25:26 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57100 00:14:54.751 05:25:26 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57100 00:14:56.659 00:14:56.659 real 0m3.235s 00:14:56.659 user 0m5.963s 00:14:56.659 sys 0m0.504s 00:14:56.659 05:25:28 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:56.659 ************************************ 00:14:56.659 END TEST spdkcli_tcp 00:14:56.659 ************************************ 00:14:56.659 05:25:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:56.659 05:25:28 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:14:56.659 05:25:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:56.659 05:25:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:56.659 05:25:28 -- common/autotest_common.sh@10 -- # set +x 00:14:56.659 ************************************ 00:14:56.659 START TEST dpdk_mem_utility 00:14:56.659 ************************************ 00:14:56.659 05:25:28 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:14:56.659 * Looking for test storage... 
00:14:56.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:14:56.659 05:25:28 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:56.659 05:25:28 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:56.659 05:25:28 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:14:56.659 05:25:28 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.659 05:25:28 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:14:56.659 05:25:28 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.659 05:25:28 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:56.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.659 --rc genhtml_branch_coverage=1 00:14:56.659 --rc genhtml_function_coverage=1 00:14:56.659 --rc genhtml_legend=1 00:14:56.659 --rc geninfo_all_blocks=1 00:14:56.659 --rc geninfo_unexecuted_blocks=1 00:14:56.659 00:14:56.659 ' 00:14:56.659 05:25:28 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:56.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.659 --rc genhtml_branch_coverage=1 00:14:56.659 --rc genhtml_function_coverage=1 00:14:56.659 --rc genhtml_legend=1 00:14:56.659 --rc geninfo_all_blocks=1 00:14:56.659 --rc 
geninfo_unexecuted_blocks=1 00:14:56.659 00:14:56.659 ' 00:14:56.659 05:25:28 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:56.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.659 --rc genhtml_branch_coverage=1 00:14:56.659 --rc genhtml_function_coverage=1 00:14:56.659 --rc genhtml_legend=1 00:14:56.659 --rc geninfo_all_blocks=1 00:14:56.659 --rc geninfo_unexecuted_blocks=1 00:14:56.659 00:14:56.659 ' 00:14:56.659 05:25:28 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:56.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.659 --rc genhtml_branch_coverage=1 00:14:56.659 --rc genhtml_function_coverage=1 00:14:56.659 --rc genhtml_legend=1 00:14:56.659 --rc geninfo_all_blocks=1 00:14:56.659 --rc geninfo_unexecuted_blocks=1 00:14:56.659 00:14:56.659 ' 00:14:56.659 05:25:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:14:56.659 05:25:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57217 00:14:56.659 05:25:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:56.659 05:25:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57217 00:14:56.659 05:25:28 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57217 ']' 00:14:56.659 05:25:28 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.659 05:25:28 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:56.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.660 05:25:28 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:56.660 05:25:28 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:56.660 05:25:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:14:56.660 [2024-11-20 05:25:28.310393] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:14:56.660 [2024-11-20 05:25:28.310531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57217 ] 00:14:56.660 [2024-11-20 05:25:28.474044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.919 [2024-11-20 05:25:28.594301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.499 05:25:29 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:57.499 05:25:29 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:14:57.500 05:25:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:14:57.500 05:25:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:14:57.500 05:25:29 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.500 05:25:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:14:57.500 { 00:14:57.500 "filename": "/tmp/spdk_mem_dump.txt" 00:14:57.500 } 00:14:57.500 05:25:29 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.500 05:25:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:14:57.500 DPDK memory size 816.000000 MiB in 1 heap(s) 00:14:57.500 1 heaps totaling size 816.000000 MiB 00:14:57.500 size: 816.000000 MiB heap id: 0 00:14:57.500 end heaps---------- 00:14:57.500 9 mempools totaling size 595.772034 MiB 00:14:57.500 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:14:57.500 size: 158.602051 MiB name: PDU_data_out_Pool 00:14:57.500 size: 92.545471 MiB name: bdev_io_57217 00:14:57.500 size: 50.003479 MiB name: msgpool_57217 00:14:57.500 size: 36.509338 MiB name: fsdev_io_57217 00:14:57.500 size: 21.763794 MiB name: PDU_Pool 00:14:57.500 size: 19.513306 MiB name: SCSI_TASK_Pool 00:14:57.500 size: 4.133484 MiB name: evtpool_57217 00:14:57.500 size: 0.026123 MiB name: Session_Pool 00:14:57.500 end mempools------- 00:14:57.500 6 memzones totaling size 4.142822 MiB 00:14:57.500 size: 1.000366 MiB name: RG_ring_0_57217 00:14:57.500 size: 1.000366 MiB name: RG_ring_1_57217 00:14:57.500 size: 1.000366 MiB name: RG_ring_4_57217 00:14:57.500 size: 1.000366 MiB name: RG_ring_5_57217 00:14:57.500 size: 0.125366 MiB name: RG_ring_2_57217 00:14:57.500 size: 0.015991 MiB name: RG_ring_3_57217 00:14:57.500 end memzones------- 00:14:57.500 05:25:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:14:57.768 heap id: 0 total size: 816.000000 MiB number of busy elements: 314 number of free elements: 18 00:14:57.769 list of free elements. 
size: 16.791626 MiB 00:14:57.769 element at address: 0x200006400000 with size: 1.995972 MiB 00:14:57.769 element at address: 0x20000a600000 with size: 1.995972 MiB 00:14:57.769 element at address: 0x200003e00000 with size: 1.991028 MiB 00:14:57.769 element at address: 0x200018d00040 with size: 0.999939 MiB 00:14:57.769 element at address: 0x200019100040 with size: 0.999939 MiB 00:14:57.769 element at address: 0x200019200000 with size: 0.999084 MiB 00:14:57.769 element at address: 0x200031e00000 with size: 0.994324 MiB 00:14:57.769 element at address: 0x200000400000 with size: 0.992004 MiB 00:14:57.769 element at address: 0x200018a00000 with size: 0.959656 MiB 00:14:57.769 element at address: 0x200019500040 with size: 0.936401 MiB 00:14:57.769 element at address: 0x200000200000 with size: 0.716980 MiB 00:14:57.769 element at address: 0x20001ac00000 with size: 0.560730 MiB 00:14:57.769 element at address: 0x200000c00000 with size: 0.490173 MiB 00:14:57.769 element at address: 0x200018e00000 with size: 0.487976 MiB 00:14:57.769 element at address: 0x200019600000 with size: 0.485413 MiB 00:14:57.769 element at address: 0x200012c00000 with size: 0.443481 MiB 00:14:57.769 element at address: 0x200028000000 with size: 0.391663 MiB 00:14:57.769 element at address: 0x200000800000 with size: 0.350891 MiB 00:14:57.769 list of standard malloc elements. 
size: 199.287476 MiB 00:14:57.769 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:14:57.769 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:14:57.769 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:14:57.769 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:14:57.769 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:14:57.769 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:14:57.769 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:14:57.769 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:14:57.769 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:14:57.769 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:14:57.769 element at address: 0x200012bff040 with size: 0.000305 MiB 00:14:57.769 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:14:57.769 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:14:57.769 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:14:57.769 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200000cff000 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:14:57.769 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200012bff180 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200012bff280 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200012bff380 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200012bff480 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200012bff580 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200012bff680 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200012bff780 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200012bff880 with size: 0.000244 MiB 00:14:57.769 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200012c71880 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200012c71980 with size: 0.000244 MiB 00:14:57.769 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200012c72080 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200012c72180 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:14:57.770 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:14:57.770 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:14:57.770 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac90dc0 with size: 0.000244 
MiB 00:14:57.770 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac929c0 
with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:14:57.770 element at 
address: 0x20001ac945c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200028064440 with size: 0.000244 MiB 00:14:57.770 element at address: 0x200028064540 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20002806b200 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20002806b480 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20002806b580 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20002806b680 with size: 0.000244 MiB 00:14:57.770 element at address: 0x20002806b780 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806b880 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806b980 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806bd80 with size: 0.000244 MiB 
00:14:57.771 element at address: 0x20002806be80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806c080 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806c180 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806c280 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806c380 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806c480 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806c580 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806c680 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806c780 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806c880 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806c980 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806d080 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806d180 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806d280 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806d380 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806d480 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806d580 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806d680 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806d780 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806d880 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806d980 with 
size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806da80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806db80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806de80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806df80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806e080 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806e180 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806e280 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806e380 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806e480 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806e580 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806e680 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806e780 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806e880 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806e980 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806f080 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806f180 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806f280 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806f380 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806f480 with size: 0.000244 MiB 00:14:57.771 element at address: 
0x20002806f580 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806f680 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806f780 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806f880 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806f980 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:14:57.771 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:14:57.771 list of memzone associated elements. size: 599.920898 MiB 00:14:57.771 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:14:57.771 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:14:57.771 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:14:57.771 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:14:57.771 element at address: 0x200012df4740 with size: 92.045105 MiB 00:14:57.771 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57217_0 00:14:57.771 element at address: 0x200000dff340 with size: 48.003113 MiB 00:14:57.771 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57217_0 00:14:57.771 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:14:57.771 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57217_0 00:14:57.771 element at address: 0x2000197be900 with size: 20.255615 MiB 00:14:57.771 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:14:57.771 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:14:57.771 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:14:57.771 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:14:57.771 associated memzone info: size: 3.000122 MiB name: 
MP_evtpool_57217_0 00:14:57.771 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:14:57.771 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57217 00:14:57.771 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:14:57.771 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57217 00:14:57.771 element at address: 0x200018efde00 with size: 1.008179 MiB 00:14:57.771 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:14:57.771 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:14:57.771 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:14:57.771 element at address: 0x200018afde00 with size: 1.008179 MiB 00:14:57.771 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:14:57.771 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:14:57.771 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:14:57.771 element at address: 0x200000cff100 with size: 1.000549 MiB 00:14:57.771 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57217 00:14:57.771 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:14:57.771 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57217 00:14:57.771 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:14:57.771 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57217 00:14:57.771 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:14:57.771 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57217 00:14:57.771 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:14:57.771 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57217 00:14:57.771 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:14:57.771 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57217 00:14:57.771 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:14:57.771 associated memzone info: size: 0.500366 MiB name: 
RG_MP_PDU_Pool 00:14:57.771 element at address: 0x200012c72280 with size: 0.500549 MiB 00:14:57.771 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:14:57.771 element at address: 0x20001967c440 with size: 0.250549 MiB 00:14:57.771 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:14:57.771 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:14:57.771 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57217 00:14:57.771 element at address: 0x20000085df80 with size: 0.125549 MiB 00:14:57.771 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57217 00:14:57.771 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:14:57.771 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:14:57.771 element at address: 0x200028064640 with size: 0.023804 MiB 00:14:57.771 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:14:57.771 element at address: 0x200000859d40 with size: 0.016174 MiB 00:14:57.771 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57217 00:14:57.771 element at address: 0x20002806a7c0 with size: 0.002502 MiB 00:14:57.771 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:14:57.771 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:14:57.771 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57217 00:14:57.771 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:14:57.771 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57217 00:14:57.771 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:14:57.771 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57217 00:14:57.771 element at address: 0x20002806b300 with size: 0.000366 MiB 00:14:57.771 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:14:57.772 05:25:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:14:57.772 
05:25:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57217 00:14:57.772 05:25:29 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57217 ']' 00:14:57.772 05:25:29 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57217 00:14:57.772 05:25:29 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:14:57.772 05:25:29 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:57.772 05:25:29 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57217 00:14:57.772 05:25:29 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:57.772 05:25:29 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:57.772 05:25:29 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57217' 00:14:57.772 killing process with pid 57217 00:14:57.772 05:25:29 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57217 00:14:57.772 05:25:29 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57217 00:14:59.684 00:14:59.684 real 0m2.970s 00:14:59.684 user 0m2.971s 00:14:59.684 sys 0m0.465s 00:14:59.684 ************************************ 00:14:59.684 END TEST dpdk_mem_utility 00:14:59.684 ************************************ 00:14:59.684 05:25:31 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:59.684 05:25:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:14:59.684 05:25:31 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:14:59.684 05:25:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:59.684 05:25:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:59.684 05:25:31 -- common/autotest_common.sh@10 -- # set +x 00:14:59.684 ************************************ 00:14:59.684 START TEST event 00:14:59.684 ************************************ 00:14:59.684 
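The killprocess trace above (pid 57217, from common/autotest_common.sh) follows a common check-kill-reap shell pattern: probe the pid with `kill -0`, send the signal, then `wait` to reap it. The sketch below paraphrases that idea only — the function body, error handling, and message are assumptions, not the repository's actual helper.

```shell
#!/usr/bin/env bash
# Hedged sketch of the kill-and-wait pattern seen in the trace above.
killprocess() {
    local pid=$1
    # kill -0 sends no signal; it only tests whether the process exists
    kill -0 "$pid" 2>/dev/null || return 0
    kill "$pid"
    # reap the child so the shell leaves no zombie behind
    wait "$pid" 2>/dev/null
    echo "killed process $pid"
}

sleep 60 &
killprocess $!
```

The `wait` step matters in test scripts: without it, the exit status is lost and the dead child lingers as a zombie until the harness itself exits.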
05:25:31 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:14:59.684 * Looking for test storage... 00:14:59.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:14:59.684 05:25:31 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:59.684 05:25:31 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:59.684 05:25:31 event -- common/autotest_common.sh@1691 -- # lcov --version 00:14:59.684 05:25:31 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:59.684 05:25:31 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:59.684 05:25:31 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:59.684 05:25:31 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:59.684 05:25:31 event -- scripts/common.sh@336 -- # IFS=.-: 00:14:59.684 05:25:31 event -- scripts/common.sh@336 -- # read -ra ver1 00:14:59.684 05:25:31 event -- scripts/common.sh@337 -- # IFS=.-: 00:14:59.684 05:25:31 event -- scripts/common.sh@337 -- # read -ra ver2 00:14:59.684 05:25:31 event -- scripts/common.sh@338 -- # local 'op=<' 00:14:59.684 05:25:31 event -- scripts/common.sh@340 -- # ver1_l=2 00:14:59.684 05:25:31 event -- scripts/common.sh@341 -- # ver2_l=1 00:14:59.684 05:25:31 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:59.684 05:25:31 event -- scripts/common.sh@344 -- # case "$op" in 00:14:59.684 05:25:31 event -- scripts/common.sh@345 -- # : 1 00:14:59.684 05:25:31 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:59.684 05:25:31 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:59.684 05:25:31 event -- scripts/common.sh@365 -- # decimal 1 00:14:59.684 05:25:31 event -- scripts/common.sh@353 -- # local d=1 00:14:59.684 05:25:31 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:59.684 05:25:31 event -- scripts/common.sh@355 -- # echo 1 00:14:59.684 05:25:31 event -- scripts/common.sh@365 -- # ver1[v]=1 00:14:59.684 05:25:31 event -- scripts/common.sh@366 -- # decimal 2 00:14:59.684 05:25:31 event -- scripts/common.sh@353 -- # local d=2 00:14:59.684 05:25:31 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:59.684 05:25:31 event -- scripts/common.sh@355 -- # echo 2 00:14:59.684 05:25:31 event -- scripts/common.sh@366 -- # ver2[v]=2 00:14:59.684 05:25:31 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:59.684 05:25:31 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:59.684 05:25:31 event -- scripts/common.sh@368 -- # return 0 00:14:59.684 05:25:31 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:59.684 05:25:31 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:59.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.684 --rc genhtml_branch_coverage=1 00:14:59.684 --rc genhtml_function_coverage=1 00:14:59.684 --rc genhtml_legend=1 00:14:59.684 --rc geninfo_all_blocks=1 00:14:59.684 --rc geninfo_unexecuted_blocks=1 00:14:59.684 00:14:59.684 ' 00:14:59.684 05:25:31 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:59.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.684 --rc genhtml_branch_coverage=1 00:14:59.684 --rc genhtml_function_coverage=1 00:14:59.684 --rc genhtml_legend=1 00:14:59.684 --rc geninfo_all_blocks=1 00:14:59.684 --rc geninfo_unexecuted_blocks=1 00:14:59.684 00:14:59.684 ' 00:14:59.684 05:25:31 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:59.684 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:59.684 --rc genhtml_branch_coverage=1 00:14:59.684 --rc genhtml_function_coverage=1 00:14:59.684 --rc genhtml_legend=1 00:14:59.684 --rc geninfo_all_blocks=1 00:14:59.684 --rc geninfo_unexecuted_blocks=1 00:14:59.684 00:14:59.684 ' 00:14:59.684 05:25:31 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:59.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.684 --rc genhtml_branch_coverage=1 00:14:59.684 --rc genhtml_function_coverage=1 00:14:59.684 --rc genhtml_legend=1 00:14:59.684 --rc geninfo_all_blocks=1 00:14:59.684 --rc geninfo_unexecuted_blocks=1 00:14:59.684 00:14:59.684 ' 00:14:59.684 05:25:31 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:59.684 05:25:31 event -- bdev/nbd_common.sh@6 -- # set -e 00:14:59.684 05:25:31 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:14:59.684 05:25:31 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:14:59.684 05:25:31 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:59.684 05:25:31 event -- common/autotest_common.sh@10 -- # set +x 00:14:59.684 ************************************ 00:14:59.684 START TEST event_perf 00:14:59.684 ************************************ 00:14:59.684 05:25:31 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:14:59.684 Running I/O for 1 seconds...[2024-11-20 05:25:31.293247] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:14:59.684 [2024-11-20 05:25:31.293376] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57313 ] 00:14:59.684 [2024-11-20 05:25:31.455428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:59.946 [2024-11-20 05:25:31.583007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.946 [2024-11-20 05:25:31.583248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.946 [2024-11-20 05:25:31.583483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.946 Running I/O for 1 seconds...[2024-11-20 05:25:31.583484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:01.327 00:15:01.327 lcore 0: 158506 00:15:01.327 lcore 1: 158501 00:15:01.327 lcore 2: 158503 00:15:01.327 lcore 3: 158504 00:15:01.327 done. 
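Earlier in this log, scripts/common.sh traces a cmp_versions helper that splits version strings on `.` and `-` (via `IFS=.-` and `read -ra`) and compares the fields numerically, left to right — that is how `lt 1.15 2` resolves. A stand-alone sketch of the same idea follows; the function name `cmp_lt` and its exact structure are assumptions, not the repository's code.

```shell
#!/usr/bin/env bash
# Hedged sketch of field-wise version comparison, as traced in scripts/common.sh.
cmp_lt() {
    local IFS=.-
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # missing fields compare as 0, so 1.15 vs 2 becomes (1,15) vs (2,0)
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal versions are not less-than
}

cmp_lt 1.15 2 && echo "1.15 < 2"
```

Comparing field by field avoids the classic string-comparison trap where "1.15" would sort before "1.9" lexicographically.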
00:15:01.327 00:15:01.327 real 0m1.504s 00:15:01.327 user 0m4.279s 00:15:01.327 sys 0m0.094s 00:15:01.327 05:25:32 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:01.327 05:25:32 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:15:01.327 ************************************ 00:15:01.327 END TEST event_perf 00:15:01.327 ************************************ 00:15:01.327 05:25:32 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:01.327 05:25:32 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:01.327 05:25:32 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:01.327 05:25:32 event -- common/autotest_common.sh@10 -- # set +x 00:15:01.327 ************************************ 00:15:01.327 START TEST event_reactor 00:15:01.327 ************************************ 00:15:01.327 05:25:32 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:01.327 [2024-11-20 05:25:32.852548] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:15:01.327 [2024-11-20 05:25:32.853124] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57348 ] 00:15:01.327 [2024-11-20 05:25:33.012898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.327 [2024-11-20 05:25:33.134816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.707 test_start 00:15:02.707 oneshot 00:15:02.707 tick 100 00:15:02.707 tick 100 00:15:02.707 tick 250 00:15:02.707 tick 100 00:15:02.707 tick 100 00:15:02.707 tick 100 00:15:02.707 tick 250 00:15:02.707 tick 500 00:15:02.707 tick 100 00:15:02.707 tick 100 00:15:02.707 tick 250 00:15:02.707 tick 100 00:15:02.707 tick 100 00:15:02.707 test_end 00:15:02.707 ************************************ 00:15:02.707 END TEST event_reactor 00:15:02.707 ************************************ 00:15:02.707 00:15:02.707 real 0m1.485s 00:15:02.707 user 0m1.299s 00:15:02.707 sys 0m0.074s 00:15:02.707 05:25:34 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:02.707 05:25:34 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:15:02.707 05:25:34 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:15:02.707 05:25:34 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:02.707 05:25:34 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:02.707 05:25:34 event -- common/autotest_common.sh@10 -- # set +x 00:15:02.707 ************************************ 00:15:02.707 START TEST event_reactor_perf 00:15:02.707 ************************************ 00:15:02.707 05:25:34 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:15:02.708 [2024-11-20 
05:25:34.386052] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:15:02.708 [2024-11-20 05:25:34.386175] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57390 ] 00:15:02.967 [2024-11-20 05:25:34.549650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.967 [2024-11-20 05:25:34.669598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.395 test_start 00:15:04.395 test_end 00:15:04.395 Performance: 312071 events per second 00:15:04.395 00:15:04.395 real 0m1.484s 00:15:04.395 user 0m1.296s 00:15:04.395 sys 0m0.078s 00:15:04.395 05:25:35 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:04.395 ************************************ 00:15:04.395 END TEST event_reactor_perf 00:15:04.395 ************************************ 00:15:04.395 05:25:35 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:15:04.395 05:25:35 event -- event/event.sh@49 -- # uname -s 00:15:04.395 05:25:35 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:15:04.395 05:25:35 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:15:04.395 05:25:35 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:04.395 05:25:35 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:04.395 05:25:35 event -- common/autotest_common.sh@10 -- # set +x 00:15:04.395 ************************************ 00:15:04.395 START TEST event_scheduler 00:15:04.395 ************************************ 00:15:04.395 05:25:35 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:15:04.395 * Looking for test storage... 
00:15:04.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:15:04.395 05:25:35 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:04.395 05:25:35 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:15:04.395 05:25:35 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:04.395 05:25:36 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:15:04.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:04.395 05:25:36 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:15:04.395 05:25:36 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:04.396 05:25:36 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:04.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.396 --rc genhtml_branch_coverage=1 00:15:04.396 --rc genhtml_function_coverage=1 00:15:04.396 --rc genhtml_legend=1 00:15:04.396 --rc geninfo_all_blocks=1 00:15:04.396 --rc geninfo_unexecuted_blocks=1 00:15:04.396 00:15:04.396 ' 00:15:04.396 05:25:36 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:04.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.396 
--rc genhtml_branch_coverage=1 00:15:04.396 --rc genhtml_function_coverage=1 00:15:04.396 --rc genhtml_legend=1 00:15:04.396 --rc geninfo_all_blocks=1 00:15:04.396 --rc geninfo_unexecuted_blocks=1 00:15:04.396 00:15:04.396 ' 00:15:04.396 05:25:36 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:04.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.396 --rc genhtml_branch_coverage=1 00:15:04.396 --rc genhtml_function_coverage=1 00:15:04.396 --rc genhtml_legend=1 00:15:04.396 --rc geninfo_all_blocks=1 00:15:04.396 --rc geninfo_unexecuted_blocks=1 00:15:04.396 00:15:04.396 ' 00:15:04.396 05:25:36 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:04.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.396 --rc genhtml_branch_coverage=1 00:15:04.396 --rc genhtml_function_coverage=1 00:15:04.396 --rc genhtml_legend=1 00:15:04.396 --rc geninfo_all_blocks=1 00:15:04.396 --rc geninfo_unexecuted_blocks=1 00:15:04.396 00:15:04.396 ' 00:15:04.396 05:25:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:15:04.396 05:25:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57455 00:15:04.396 05:25:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:15:04.396 05:25:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57455 00:15:04.396 05:25:36 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 57455 ']' 00:15:04.396 05:25:36 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.396 05:25:36 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:04.396 05:25:36 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:04.396 05:25:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:15:04.396 05:25:36 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:04.396 05:25:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:04.396 [2024-11-20 05:25:36.121385] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:15:04.396 [2024-11-20 05:25:36.121727] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57455 ] 00:15:04.657 [2024-11-20 05:25:36.283423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:04.657 [2024-11-20 05:25:36.436699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.657 [2024-11-20 05:25:36.436982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.657 [2024-11-20 05:25:36.437851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.657 [2024-11-20 05:25:36.437879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:05.231 05:25:36 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:05.231 05:25:36 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:15:05.231 05:25:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:15:05.231 05:25:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.231 05:25:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:05.231 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:05.231 POWER: Cannot set governor of lcore 0 to userspace 00:15:05.231 POWER: failed 
to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:05.231 POWER: Cannot set governor of lcore 0 to performance 00:15:05.231 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:05.231 POWER: Cannot set governor of lcore 0 to userspace 00:15:05.231 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:05.231 POWER: Cannot set governor of lcore 0 to userspace 00:15:05.231 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:15:05.231 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:15:05.231 POWER: Unable to set Power Management Environment for lcore 0 00:15:05.231 [2024-11-20 05:25:36.903866] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:15:05.231 [2024-11-20 05:25:36.903890] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:15:05.231 [2024-11-20 05:25:36.903899] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:15:05.231 [2024-11-20 05:25:36.903920] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:15:05.231 [2024-11-20 05:25:36.903928] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:15:05.231 [2024-11-20 05:25:36.903937] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:15:05.231 05:25:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.231 05:25:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:15:05.231 05:25:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.231 05:25:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 [2024-11-20 05:25:37.126724] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:15:05.494 05:25:37 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.494 05:25:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:15:05.494 05:25:37 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:05.494 05:25:37 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:05.494 05:25:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 ************************************ 00:15:05.494 START TEST scheduler_create_thread 00:15:05.494 ************************************ 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 2 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 3 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 4 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 5 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 6 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:15:05.494 7 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 8 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 9 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 10 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:15:05.494 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.494 05:25:37 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:06.067 ************************************ 00:15:06.067 END TEST scheduler_create_thread 00:15:06.067 ************************************ 00:15:06.067 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.067 00:15:06.067 real 0m0.592s 00:15:06.067 user 0m0.014s 00:15:06.067 sys 0m0.004s 00:15:06.067 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:06.067 05:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:06.067 05:25:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:15:06.067 05:25:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57455 00:15:06.067 05:25:37 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 57455 ']' 00:15:06.067 05:25:37 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 57455 00:15:06.067 05:25:37 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:15:06.067 05:25:37 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:06.067 05:25:37 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57455 00:15:06.067 killing process with pid 57455 00:15:06.067 05:25:37 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:15:06.067 05:25:37 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:15:06.067 05:25:37 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57455' 00:15:06.067 05:25:37 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 57455 00:15:06.067 05:25:37 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 57455 00:15:06.639 [2024-11-20 05:25:38.211722] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:15:07.216 ************************************ 00:15:07.216 END TEST event_scheduler 00:15:07.216 ************************************ 00:15:07.216 00:15:07.216 real 0m3.055s 00:15:07.216 user 0m5.460s 00:15:07.216 sys 0m0.379s 00:15:07.216 05:25:38 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:07.216 05:25:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:07.216 05:25:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:15:07.216 05:25:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:15:07.216 05:25:38 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:07.216 05:25:38 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:07.216 05:25:38 event -- common/autotest_common.sh@10 -- # set +x 00:15:07.216 ************************************ 00:15:07.216 START TEST app_repeat 00:15:07.216 ************************************ 00:15:07.216 05:25:38 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:15:07.216 05:25:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:07.216 05:25:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:07.216 05:25:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:15:07.216 05:25:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:07.216 05:25:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:15:07.216 05:25:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:15:07.216 05:25:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:15:07.216 Process app_repeat pid: 57539 00:15:07.216 spdk_app_start Round 0 00:15:07.216 05:25:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=57539 00:15:07.216 05:25:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' 
SIGINT SIGTERM EXIT 00:15:07.216 05:25:38 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:15:07.216 05:25:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57539' 00:15:07.216 05:25:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:15:07.216 05:25:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:15:07.216 05:25:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57539 /var/tmp/spdk-nbd.sock 00:15:07.216 05:25:38 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57539 ']' 00:15:07.216 05:25:38 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:07.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:07.216 05:25:38 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:07.216 05:25:38 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:07.216 05:25:38 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:07.216 05:25:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:07.216 [2024-11-20 05:25:39.025513] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:15:07.216 [2024-11-20 05:25:39.025612] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57539 ] 00:15:07.476 [2024-11-20 05:25:39.182505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:07.476 [2024-11-20 05:25:39.289748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.476 [2024-11-20 05:25:39.290078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.046 05:25:39 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:08.046 05:25:39 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:15:08.046 05:25:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:08.306 Malloc0 00:15:08.306 05:25:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:08.566 Malloc1 00:15:08.566 05:25:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:08.566 05:25:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:08.566 05:25:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:08.566 05:25:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:08.566 05:25:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:08.566 05:25:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:08.566 05:25:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:08.566 05:25:40 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:08.566 05:25:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:08.566 05:25:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:08.566 05:25:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:08.566 05:25:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:08.566 05:25:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:15:08.566 05:25:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:08.566 05:25:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:08.566 05:25:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:08.826 /dev/nbd0 00:15:08.826 05:25:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:08.826 05:25:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:08.826 05:25:40 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:08.826 05:25:40 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:15:08.826 05:25:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:08.826 05:25:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:08.826 05:25:40 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:08.826 05:25:40 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:15:08.826 05:25:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:08.826 05:25:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:08.826 05:25:40 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:08.826 1+0 records in 00:15:08.826 1+0 
records out 00:15:08.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293998 s, 13.9 MB/s 00:15:08.826 05:25:40 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:08.826 05:25:40 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:15:08.826 05:25:40 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:08.826 05:25:40 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:08.826 05:25:40 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:15:08.826 05:25:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.826 05:25:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:08.826 05:25:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:09.085 /dev/nbd1 00:15:09.085 05:25:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:09.085 05:25:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:09.085 05:25:40 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:09.085 05:25:40 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:15:09.085 05:25:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:09.085 05:25:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:09.085 05:25:40 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:09.085 05:25:40 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:15:09.085 05:25:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:09.085 05:25:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:09.085 05:25:40 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:09.085 1+0 records in 00:15:09.085 1+0 records out 00:15:09.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326217 s, 12.6 MB/s 00:15:09.085 05:25:40 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:09.085 05:25:40 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:15:09.085 05:25:40 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:09.085 05:25:40 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:09.085 05:25:40 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:15:09.085 05:25:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:09.085 05:25:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:09.085 05:25:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:09.085 05:25:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:09.085 05:25:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:09.425 05:25:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:09.425 { 00:15:09.425 "nbd_device": "/dev/nbd0", 00:15:09.425 "bdev_name": "Malloc0" 00:15:09.425 }, 00:15:09.425 { 00:15:09.425 "nbd_device": "/dev/nbd1", 00:15:09.425 "bdev_name": "Malloc1" 00:15:09.425 } 00:15:09.425 ]' 00:15:09.425 05:25:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:09.426 05:25:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:09.426 { 00:15:09.426 "nbd_device": "/dev/nbd0", 00:15:09.426 "bdev_name": "Malloc0" 00:15:09.426 }, 00:15:09.426 { 00:15:09.426 "nbd_device": "/dev/nbd1", 00:15:09.426 "bdev_name": "Malloc1" 00:15:09.426 } 00:15:09.426 ]' 
00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:09.426 /dev/nbd1' 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:09.426 /dev/nbd1' 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:09.426 256+0 records in 00:15:09.426 256+0 records out 00:15:09.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0097644 s, 107 MB/s 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:09.426 256+0 records in 00:15:09.426 256+0 records out 00:15:09.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230181 s, 45.6 MB/s 00:15:09.426 05:25:41 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:09.426 256+0 records in 00:15:09.426 256+0 records out 00:15:09.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252531 s, 41.5 MB/s 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.426 05:25:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:09.689 05:25:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:09.950 05:25:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:09.950 05:25:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:09.950 05:25:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:10.211 05:25:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:10.211 05:25:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:10.211 05:25:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:10.211 05:25:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:15:10.211 05:25:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:15:10.211 05:25:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:10.211 05:25:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:15:10.211 05:25:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:10.211 05:25:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:15:10.211 05:25:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:10.472 05:25:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:15:11.416 [2024-11-20 05:25:42.906111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:11.416 [2024-11-20 05:25:43.022056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.416 [2024-11-20 05:25:43.022095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.416 
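The trace above shows one full `nbd_dd_data_verify` cycle: random data is written through `dd` to each `/dev/nbdX` device, then `cmp -b -n 1M` checks each device back against the source file. The sketch below reproduces that write/verify pattern in isolation, with plain temp files standing in for `/dev/nbd0` and `/dev/nbd1` (the real devices require a running SPDK target and the nbd kernel module); the file names here are placeholders, not paths from the log.

```shell
#!/usr/bin/env bash
# Hedged sketch of the nbd_dd_data_verify write/verify pattern traced above.
# Plain files stand in for the nbd block devices.
set -e

tmp_file=$(mktemp)    # source of random data (nbdrandtest in the log)
target0=$(mktemp)     # stand-in for /dev/nbd0
target1=$(mktemp)     # stand-in for /dev/nbd1

# Write phase: fill the source with 256 x 4096-byte blocks of random data,
# then copy it to every target, as the "write" branch does.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for t in "$target0" "$target1"; do
    dd if="$tmp_file" of="$t" bs=4096 count=256 status=none
done

# Verify phase: byte-wise compare of the first 1M of each target against
# the source, as the "verify" branch does with cmp -b -n 1M.
for t in "$target0" "$target1"; do
    cmp -b -n 1M "$tmp_file" "$t"
done
echo "verify OK"
```

In the real test the targets are nbd devices exported over the SPDK RPC socket, so a mismatch from `cmp` would indicate data corruption somewhere in the bdev/nbd path rather than in the files themselves.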
[2024-11-20 05:25:43.157039] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:11.416 [2024-11-20 05:25:43.157142] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:13.379 spdk_app_start Round 1 00:15:13.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:13.379 05:25:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:15:13.379 05:25:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:15:13.379 05:25:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57539 /var/tmp/spdk-nbd.sock 00:15:13.379 05:25:45 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57539 ']' 00:15:13.379 05:25:45 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:13.379 05:25:45 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:13.379 05:25:45 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:15:13.379 05:25:45 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:13.379 05:25:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:13.639 05:25:45 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:13.639 05:25:45 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:15:13.639 05:25:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:13.898 Malloc0 00:15:13.898 05:25:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:14.185 Malloc1 00:15:14.185 05:25:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:14.185 05:25:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:14.185 05:25:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:14.185 05:25:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:14.185 05:25:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:14.185 05:25:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:14.185 05:25:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:14.185 05:25:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:14.185 05:25:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:14.185 05:25:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:14.185 05:25:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:14.185 05:25:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:14.185 05:25:45 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:15:14.185 05:25:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:14.185 05:25:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:14.185 05:25:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:14.445 /dev/nbd0 00:15:14.445 05:25:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:14.445 05:25:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:14.445 05:25:46 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:14.445 05:25:46 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:15:14.445 05:25:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:14.445 05:25:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:14.445 05:25:46 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:14.445 05:25:46 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:15:14.445 05:25:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:14.445 05:25:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:14.445 05:25:46 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:14.445 1+0 records in 00:15:14.445 1+0 records out 00:15:14.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310506 s, 13.2 MB/s 00:15:14.445 05:25:46 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:14.445 05:25:46 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:15:14.445 05:25:46 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:14.445 
05:25:46 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:14.445 05:25:46 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:15:14.445 05:25:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.445 05:25:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:14.445 05:25:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:14.706 /dev/nbd1 00:15:14.706 05:25:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:14.706 05:25:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:14.706 05:25:46 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:14.706 05:25:46 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:15:14.706 05:25:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:14.706 05:25:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:14.706 05:25:46 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:14.706 05:25:46 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:15:14.706 05:25:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:14.706 05:25:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:14.706 05:25:46 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:14.706 1+0 records in 00:15:14.706 1+0 records out 00:15:14.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331232 s, 12.4 MB/s 00:15:14.706 05:25:46 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:14.706 05:25:46 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:15:14.706 05:25:46 event.app_repeat 
-- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:14.706 05:25:46 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:14.706 05:25:46 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:15:14.706 05:25:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.706 05:25:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:14.706 05:25:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:14.706 05:25:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:14.706 05:25:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:14.706 05:25:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:14.706 { 00:15:14.706 "nbd_device": "/dev/nbd0", 00:15:14.706 "bdev_name": "Malloc0" 00:15:14.706 }, 00:15:14.706 { 00:15:14.706 "nbd_device": "/dev/nbd1", 00:15:14.706 "bdev_name": "Malloc1" 00:15:14.706 } 00:15:14.706 ]' 00:15:14.706 05:25:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:14.706 { 00:15:14.706 "nbd_device": "/dev/nbd0", 00:15:14.706 "bdev_name": "Malloc0" 00:15:14.706 }, 00:15:14.706 { 00:15:14.706 "nbd_device": "/dev/nbd1", 00:15:14.706 "bdev_name": "Malloc1" 00:15:14.706 } 00:15:14.706 ]' 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:14.967 /dev/nbd1' 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:14.967 /dev/nbd1' 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:15:14.967 
05:25:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:14.967 256+0 records in 00:15:14.967 256+0 records out 00:15:14.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618182 s, 170 MB/s 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:14.967 256+0 records in 00:15:14.967 256+0 records out 00:15:14.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215499 s, 48.7 MB/s 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:14.967 256+0 records in 00:15:14.967 256+0 records out 00:15:14.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201537 s, 52.0 MB/s 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.967 05:25:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:15.227 05:25:46 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:15.227 05:25:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:15.227 05:25:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:15.227 05:25:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:15.227 05:25:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:15.227 05:25:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:15.227 05:25:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:15.227 05:25:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:15.227 05:25:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:15.227 05:25:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:15.227 05:25:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:15.227 05:25:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:15.227 05:25:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:15.227 05:25:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:15.227 05:25:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:15.227 05:25:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:15.227 05:25:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:15.227 05:25:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:15.227 05:25:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:15.227 05:25:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:15.227 05:25:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:15.488 05:25:47 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:15.488 05:25:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:15.488 05:25:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:15.488 05:25:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:15.488 05:25:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:15.488 05:25:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:15.488 05:25:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:15:15.488 05:25:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:15:15.488 05:25:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:15.488 05:25:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:15:15.488 05:25:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:15.488 05:25:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:15:15.488 05:25:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:16.058 05:25:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:15:16.648 [2024-11-20 05:25:48.365907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:16.905 [2024-11-20 05:25:48.482071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.905 [2024-11-20 05:25:48.482186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.905 [2024-11-20 05:25:48.617801] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:16.905 [2024-11-20 05:25:48.617871] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:18.813 spdk_app_start Round 2 00:15:18.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
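The repeated `nbd_get_count` checks in the log (count=2 while the disks are attached, count=0 after `nbd_stop_disk`) parse the JSON that `nbd_get_disks` returns. A minimal sketch of that parsing step, using a hard-coded copy of the JSON shape seen in the trace rather than a live RPC call:

```shell
#!/usr/bin/env bash
# Sketch of the nbd_get_count JSON parsing traced above. The JSON literal
# mirrors the nbd_get_disks output in the log; no SPDK target is contacted.
set -e

nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'

# Extract the device paths, then count how many look like nbd devices,
# matching the jq + grep -c pipeline in nbd_common.sh.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
echo "$count"
```

After `nbd_stop_disk` the RPC returns `[]`, the extracted name list is empty, and the same pipeline yields 0, which is what the `'[' 0 -ne 0 ']'` check in the trace confirms before teardown.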
00:15:18.813 05:25:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:15:18.813 05:25:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:15:18.813 05:25:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57539 /var/tmp/spdk-nbd.sock 00:15:18.813 05:25:50 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57539 ']' 00:15:18.813 05:25:50 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:18.813 05:25:50 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:18.813 05:25:50 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:18.813 05:25:50 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:18.813 05:25:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:19.071 05:25:50 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:19.071 05:25:50 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:15:19.071 05:25:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:19.328 Malloc0 00:15:19.328 05:25:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:19.586 Malloc1 00:15:19.586 05:25:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:19.586 05:25:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:19.586 05:25:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:19.586 05:25:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:19.586 05:25:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:19.586 05:25:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:19.586 05:25:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:19.586 05:25:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:19.586 05:25:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:19.586 05:25:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:19.586 05:25:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:19.586 05:25:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.586 05:25:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:15:19.586 05:25:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.586 05:25:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:19.586 05:25:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:19.848 /dev/nbd0 00:15:19.848 05:25:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:19.848 05:25:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:19.848 05:25:51 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:19.848 05:25:51 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:15:19.848 05:25:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:19.848 05:25:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:19.848 05:25:51 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:19.848 05:25:51 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:15:19.848 05:25:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:15:19.848 05:25:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:19.848 05:25:51 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:19.848 1+0 records in 00:15:19.848 1+0 records out 00:15:19.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00069656 s, 5.9 MB/s 00:15:19.848 05:25:51 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:19.848 05:25:51 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:15:19.848 05:25:51 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:19.848 05:25:51 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:19.849 05:25:51 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:15:19.849 05:25:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.849 05:25:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:19.849 05:25:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:20.107 /dev/nbd1 00:15:20.107 05:25:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:20.107 05:25:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:20.107 05:25:51 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:20.107 05:25:51 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:15:20.107 05:25:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:20.107 05:25:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:20.107 05:25:51 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:20.107 05:25:51 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:15:20.107 05:25:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:20.107 05:25:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:20.107 05:25:51 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:20.107 1+0 records in 00:15:20.107 1+0 records out 00:15:20.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240071 s, 17.1 MB/s 00:15:20.107 05:25:51 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:20.107 05:25:51 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:15:20.107 05:25:51 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:20.107 05:25:51 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:20.107 05:25:51 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:15:20.107 05:25:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:20.107 05:25:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:20.107 05:25:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:20.107 05:25:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:20.107 05:25:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:20.365 { 00:15:20.365 "nbd_device": "/dev/nbd0", 00:15:20.365 "bdev_name": "Malloc0" 00:15:20.365 }, 00:15:20.365 { 00:15:20.365 "nbd_device": "/dev/nbd1", 00:15:20.365 "bdev_name": "Malloc1" 00:15:20.365 } 00:15:20.365 ]' 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:20.365 { 00:15:20.365 "nbd_device": "/dev/nbd0", 00:15:20.365 "bdev_name": "Malloc0" 00:15:20.365 }, 00:15:20.365 { 00:15:20.365 "nbd_device": "/dev/nbd1", 00:15:20.365 "bdev_name": "Malloc1" 00:15:20.365 } 00:15:20.365 ]' 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:20.365 /dev/nbd1' 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:20.365 /dev/nbd1' 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:20.365 05:25:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:20.624 256+0 records in 00:15:20.624 256+0 records out 00:15:20.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00709171 s, 148 MB/s 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:20.624 05:25:52 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:20.624 256+0 records in 00:15:20.624 256+0 records out 00:15:20.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018981 s, 55.2 MB/s 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:20.624 256+0 records in 00:15:20.624 256+0 records out 00:15:20.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195046 s, 53.8 MB/s 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.624 05:25:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:20.883 05:25:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:21.142 05:25:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:21.142 05:25:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:21.142 05:25:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:21.142 05:25:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:21.142 05:25:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:21.142 05:25:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:21.142 05:25:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:15:21.142 05:25:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:15:21.142 05:25:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:21.142 05:25:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:15:21.142 05:25:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:21.142 05:25:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:15:21.142 05:25:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:21.708 05:25:53 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:15:22.274 [2024-11-20 05:25:53.878297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:22.274 [2024-11-20 05:25:53.978226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.274 [2024-11-20 05:25:53.978235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.274 [2024-11-20 05:25:54.095226] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:22.274 [2024-11-20 05:25:54.095314] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:24.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:24.830 05:25:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 57539 /var/tmp/spdk-nbd.sock 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57539 ']' 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:15:24.830 05:25:56 event.app_repeat -- event/event.sh@39 -- # killprocess 57539 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 57539 ']' 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 57539 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57539 00:15:24.830 killing process with pid 57539 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57539' 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@971 -- # kill 57539 00:15:24.830 05:25:56 event.app_repeat -- common/autotest_common.sh@976 -- # wait 57539 00:15:25.395 spdk_app_start is called in Round 0. 00:15:25.395 Shutdown signal received, stop current app iteration 00:15:25.395 Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 reinitialization... 00:15:25.395 spdk_app_start is called in Round 1. 00:15:25.395 Shutdown signal received, stop current app iteration 00:15:25.395 Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 reinitialization... 00:15:25.395 spdk_app_start is called in Round 2. 
00:15:25.395 Shutdown signal received, stop current app iteration 00:15:25.395 Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 reinitialization... 00:15:25.395 spdk_app_start is called in Round 3. 00:15:25.395 Shutdown signal received, stop current app iteration 00:15:25.395 ************************************ 00:15:25.395 END TEST app_repeat 00:15:25.395 ************************************ 00:15:25.395 05:25:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:15:25.395 05:25:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:15:25.395 00:15:25.395 real 0m18.129s 00:15:25.395 user 0m39.550s 00:15:25.395 sys 0m2.174s 00:15:25.395 05:25:57 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:25.395 05:25:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:25.395 05:25:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:15:25.395 05:25:57 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:15:25.395 05:25:57 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:25.395 05:25:57 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:25.395 05:25:57 event -- common/autotest_common.sh@10 -- # set +x 00:15:25.395 ************************************ 00:15:25.395 START TEST cpu_locks 00:15:25.395 ************************************ 00:15:25.395 05:25:57 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:15:25.395 * Looking for test storage... 
00:15:25.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:15:25.395 05:25:57 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:25.395 05:25:57 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:15:25.395 05:25:57 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:25.654 05:25:57 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:15:25.654 05:25:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:15:25.655 05:25:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.655 05:25:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:15:25.655 05:25:57 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:15:25.655 05:25:57 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:25.655 05:25:57 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:25.655 05:25:57 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:15:25.655 05:25:57 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.655 05:25:57 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:25.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.655 --rc genhtml_branch_coverage=1 00:15:25.655 --rc genhtml_function_coverage=1 00:15:25.655 --rc genhtml_legend=1 00:15:25.655 --rc geninfo_all_blocks=1 00:15:25.655 --rc geninfo_unexecuted_blocks=1 00:15:25.655 00:15:25.655 ' 00:15:25.655 05:25:57 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:25.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.655 --rc genhtml_branch_coverage=1 00:15:25.655 --rc genhtml_function_coverage=1 00:15:25.655 --rc genhtml_legend=1 00:15:25.655 --rc geninfo_all_blocks=1 00:15:25.655 --rc geninfo_unexecuted_blocks=1 
00:15:25.655 00:15:25.655 ' 00:15:25.655 05:25:57 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:25.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.655 --rc genhtml_branch_coverage=1 00:15:25.655 --rc genhtml_function_coverage=1 00:15:25.655 --rc genhtml_legend=1 00:15:25.655 --rc geninfo_all_blocks=1 00:15:25.655 --rc geninfo_unexecuted_blocks=1 00:15:25.655 00:15:25.655 ' 00:15:25.655 05:25:57 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:25.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.655 --rc genhtml_branch_coverage=1 00:15:25.655 --rc genhtml_function_coverage=1 00:15:25.655 --rc genhtml_legend=1 00:15:25.655 --rc geninfo_all_blocks=1 00:15:25.655 --rc geninfo_unexecuted_blocks=1 00:15:25.655 00:15:25.655 ' 00:15:25.655 05:25:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:15:25.655 05:25:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:15:25.655 05:25:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:15:25.655 05:25:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:15:25.655 05:25:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:25.655 05:25:57 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:25.655 05:25:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:25.655 ************************************ 00:15:25.655 START TEST default_locks 00:15:25.655 ************************************ 00:15:25.655 05:25:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:15:25.655 05:25:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57975 00:15:25.655 05:25:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 57975 00:15:25.655 05:25:57 event.cpu_locks.default_locks -- 
common/autotest_common.sh@833 -- # '[' -z 57975 ']' 00:15:25.655 05:25:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:25.655 05:25:57 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.655 05:25:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:25.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.655 05:25:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.655 05:25:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:25.655 05:25:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:15:25.655 [2024-11-20 05:25:57.378847] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:15:25.655 [2024-11-20 05:25:57.379150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57975 ] 00:15:25.913 [2024-11-20 05:25:57.535184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.913 [2024-11-20 05:25:57.642089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.478 05:25:58 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:26.478 05:25:58 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:15:26.478 05:25:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 57975 00:15:26.479 05:25:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:26.479 05:25:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 57975 00:15:26.736 05:25:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 57975 00:15:26.736 05:25:58 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 57975 ']' 00:15:26.736 05:25:58 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 57975 00:15:26.736 05:25:58 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:15:26.736 05:25:58 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:26.737 05:25:58 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57975 00:15:26.737 killing process with pid 57975 00:15:26.737 05:25:58 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:26.737 05:25:58 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:26.737 05:25:58 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 57975' 00:15:26.737 05:25:58 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 57975 00:15:26.737 05:25:58 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 57975 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57975 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57975 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:15:28.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 57975 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 57975 ']' 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:15:28.152 ERROR: process (pid: 57975) is no longer running 00:15:28.152 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (57975) - No such process 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:28.152 ************************************ 00:15:28.152 END TEST default_locks 00:15:28.152 ************************************ 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:15:28.152 00:15:28.152 real 0m2.449s 00:15:28.152 user 0m2.416s 00:15:28.152 sys 0m0.493s 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:28.152 05:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:15:28.152 05:25:59 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:15:28.152 05:25:59 event.cpu_locks -- common/autotest_common.sh@1103 -- # 
'[' 2 -le 1 ']' 00:15:28.152 05:25:59 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:28.152 05:25:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:28.152 ************************************ 00:15:28.152 START TEST default_locks_via_rpc 00:15:28.152 ************************************ 00:15:28.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.152 05:25:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:15:28.152 05:25:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58034 00:15:28.152 05:25:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58034 00:15:28.152 05:25:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58034 ']' 00:15:28.152 05:25:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:28.152 05:25:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.152 05:25:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:28.152 05:25:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.152 05:25:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:28.152 05:25:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.152 [2024-11-20 05:25:59.858538] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:15:28.152 [2024-11-20 05:25:59.858856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58034 ]
00:15:28.410 [2024-11-20 05:26:00.014786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:28.410 [2024-11-20 05:26:00.121946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58034
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58034
00:15:28.977 05:26:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:15:29.235 05:26:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58034
00:15:29.235 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58034 ']'
00:15:29.235 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58034
00:15:29.235 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname
00:15:29.235 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:29.235 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58034
00:15:29.235 killing process with pid 58034 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:29.235 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:15:29.235 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58034'
00:15:29.235 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58034
00:15:29.235 05:26:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58034
00:15:30.607 ************************************
00:15:30.607 END TEST default_locks_via_rpc
00:15:30.607 ************************************
00:15:30.607
00:15:30.607 real 0m2.459s
00:15:30.607 user 0m2.483s
00:15:30.607 sys 0m0.474s
00:15:30.607 05:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:30.607 05:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:30.607 05:26:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:15:30.607 05:26:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:15:30.607 05:26:02 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:30.607 05:26:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:15:30.607 ************************************
00:15:30.607 START TEST non_locking_app_on_locked_coremask
00:15:30.607 ************************************
00:15:30.607 05:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask
00:15:30.607 05:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58091
00:15:30.607 05:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58091 /var/tmp/spdk.sock
00:15:30.607 05:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:15:30.607 05:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58091 ']'
00:15:30.607 05:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:30.607 05:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:15:30.607 05:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:30.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:30.607 05:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:15:30.607 05:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:15:30.607 [2024-11-20 05:26:02.382780] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:15:30.607 [2024-11-20 05:26:02.383170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58091 ]
00:15:30.864 [2024-11-20 05:26:02.551944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:30.864 [2024-11-20 05:26:02.671881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:31.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:15:31.798 05:26:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:31.798 05:26:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:15:31.798 05:26:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58107
00:15:31.798 05:26:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58107 /var/tmp/spdk2.sock
00:15:31.798 05:26:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58107 ']'
00:15:31.798 05:26:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:15:31.798 05:26:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:15:31.798 05:26:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:15:31.798 05:26:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:15:31.798 05:26:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:15:31.798 05:26:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:15:31.798 [2024-11-20 05:26:03.421181] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:15:31.798 [2024-11-20 05:26:03.421391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58107 ]
00:15:31.798 [2024-11-20 05:26:03.613960] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:15:31.798 [2024-11-20 05:26:03.614042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:32.056 [2024-11-20 05:26:03.854074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:33.430 05:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:33.431 05:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:15:33.431 05:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58091
00:15:33.431 05:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:15:33.431 05:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58091
00:15:33.689 05:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58091
00:15:33.689 05:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58091 ']'
00:15:33.689 05:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58091
00:15:33.689 05:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:15:33.689 05:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:33.689 05:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58091
00:15:33.689 05:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:33.689 05:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' killing process with pid 58091 05:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58091' 05:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58091
00:15:36.969 05:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58091
00:15:36.969 05:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58107
00:15:36.969 05:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58107 ']'
00:15:36.969 05:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58107
00:15:36.969 05:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:15:36.969 05:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:36.969 05:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58107
00:15:36.969 killing process with pid 58107 05:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:36.969 05:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:15:36.969 05:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58107' 05:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58107
00:15:36.969 05:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58107
00:15:37.903
00:15:37.903 real 0m7.213s
00:15:37.903 user 0m7.390s
00:15:37.903 sys 0m0.969s
00:15:37.903 05:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:37.903 ************************************
00:15:37.903 END TEST non_locking_app_on_locked_coremask
00:15:37.903 ************************************
00:15:37.903 05:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:15:37.903 05:26:09 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:15:37.903 05:26:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:15:37.903 05:26:09 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:37.903 05:26:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:15:37.903 ************************************
00:15:37.903 START TEST locking_app_on_unlocked_coremask
00:15:37.903 ************************************
00:15:37.903 05:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask
00:15:37.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:37.903 05:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58209
00:15:37.903 05:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58209 /var/tmp/spdk.sock
00:15:37.903 05:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58209 ']'
00:15:37.903 05:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:37.903 05:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:15:37.903 05:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:37.903 05:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:15:37.903 05:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:15:37.903 05:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:15:37.903 [2024-11-20 05:26:09.611133] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:15:37.903 [2024-11-20 05:26:09.611460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58209 ]
00:15:38.161 [2024-11-20 05:26:09.766983] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:15:38.161 [2024-11-20 05:26:09.767039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:38.161 [2024-11-20 05:26:09.872055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:38.730 05:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:38.730 05:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:15:38.730 05:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58225
00:15:38.730 05:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58225 /var/tmp/spdk2.sock
00:15:38.730 05:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:15:38.730 05:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58225 ']'
00:15:38.730 05:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:15:38.730 05:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:15:38.730 05:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:15:38.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:15:38.730 05:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:15:38.730 05:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:15:38.730 [2024-11-20 05:26:10.515021] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:15:38.730 [2024-11-20 05:26:10.515743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58225 ]
00:15:38.988 [2024-11-20 05:26:10.683250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:39.245 [2024-11-20 05:26:10.890519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:40.176 05:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:40.176 05:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:15:40.176 05:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58225
00:15:40.176 05:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58225
00:15:40.176 05:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:15:40.740 05:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58209
00:15:40.740 05:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58209 ']'
00:15:40.740 05:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58209
00:15:40.740 05:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:15:40.740 05:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:40.740 05:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58209
00:15:40.740 killing process with pid 58209 05:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:40.741 05:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:15:40.741 05:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58209'
00:15:40.741 05:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58209
00:15:40.741 05:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58209
00:15:43.366 05:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58225
00:15:43.366 05:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58225 ']'
00:15:43.366 05:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58225
00:15:43.367 05:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:15:43.367 05:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:43.367 05:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58225
00:15:43.367 killing process with pid 58225 05:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:43.367 05:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:15:43.367 05:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58225'
00:15:43.367 05:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58225
00:15:43.367 05:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58225
00:15:44.742 ************************************
00:15:44.742 END TEST locking_app_on_unlocked_coremask
00:15:44.742 ************************************
00:15:44.742
00:15:44.742 real 0m6.807s
00:15:44.742 user 0m6.914s
00:15:44.742 sys 0m0.992s
00:15:44.742 05:26:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:44.742 05:26:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:15:44.742 05:26:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:15:44.742 05:26:16 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:15:44.742 05:26:16 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:44.742 05:26:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:15:44.742 ************************************
00:15:44.742 START TEST locking_app_on_locked_coremask
00:15:44.742 ************************************
00:15:44.742 05:26:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask
00:15:44.742 05:26:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58322
00:15:44.742 05:26:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58322 /var/tmp/spdk.sock
00:15:44.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:44.742 05:26:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58322 ']'
00:15:44.742 05:26:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:44.742 05:26:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:15:44.742 05:26:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:44.742 05:26:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:15:44.742 05:26:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:15:44.742 05:26:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:15:44.742 [2024-11-20 05:26:16.454473] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:15:44.742 [2024-11-20 05:26:16.454594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58322 ]
00:15:45.000 [2024-11-20 05:26:16.610294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:45.000 [2024-11-20 05:26:16.716342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:45.565 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:45.565 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:15:45.565 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58338
00:15:45.565 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:15:45.565 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58338 /var/tmp/spdk2.sock
00:15:45.565 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:15:45.565 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58338 /var/tmp/spdk2.sock
00:15:45.565 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:15:45.565 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:45.565 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:15:45.566 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:45.566 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58338 /var/tmp/spdk2.sock
00:15:45.566 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58338 ']'
00:15:45.566 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:15:45.566 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:15:45.566 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:15:45.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:15:45.566 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:15:45.566 05:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:15:45.566 [2024-11-20 05:26:17.372650] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:15:45.566 [2024-11-20 05:26:17.372991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58338 ]
00:15:45.825 [2024-11-20 05:26:17.538088] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58322 has claimed it.
00:15:45.825 [2024-11-20 05:26:17.538164] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:15:46.390 ERROR: process (pid: 58338) is no longer running
00:15:46.390 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58338) - No such process
00:15:46.390 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:46.390 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1
00:15:46.390 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:15:46.390 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:46.390 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:46.390 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:15:46.390 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58322
00:15:46.390 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58322
00:15:46.390 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:15:46.390 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58322
00:15:46.390 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58322 ']'
00:15:46.390 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58322
00:15:46.390 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:15:46.390 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:46.390 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58322
00:15:46.391 killing process with pid 58322 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:46.391 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:15:46.391 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58322'
00:15:46.391 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58322
00:15:46.391 05:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58322
00:15:47.763 ************************************
00:15:47.763 END TEST locking_app_on_locked_coremask
00:15:47.763 ************************************
00:15:47.763
00:15:47.763 real 0m3.149s
00:15:47.763 user 0m3.288s
00:15:47.763 sys 0m0.621s
00:15:47.763 05:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:47.763 05:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:15:47.763 05:26:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:15:47.763 05:26:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:15:47.763 05:26:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:47.763 05:26:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:15:47.763 ************************************
00:15:47.763 START TEST locking_overlapped_coremask
00:15:47.763 ************************************
00:15:47.763 05:26:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask
00:15:47.763 05:26:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58391
00:15:47.763 05:26:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58391 /var/tmp/spdk.sock
00:15:47.763 05:26:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58391 ']'
00:15:47.763 05:26:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:47.763 05:26:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:15:47.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:47.763 05:26:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:47.763 05:26:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:15:47.763 05:26:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:15:47.763 05:26:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:15:48.020 [2024-11-20 05:26:19.639895] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:15:48.021 [2024-11-20 05:26:19.640032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58391 ]
00:15:48.021 [2024-11-20 05:26:19.792968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:15:48.278 [2024-11-20 05:26:19.899804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:48.278 [2024-11-20 05:26:19.899940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:48.278 [2024-11-20 05:26:19.900101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:48.844 05:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:48.844 05:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0
00:15:48.844 05:26:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58409
00:15:48.844 05:26:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58409 /var/tmp/spdk2.sock
00:15:48.844 05:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:15:48.844 05:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58409 /var/tmp/spdk2.sock
00:15:48.844 05:26:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:15:48.844 05:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:15:48.844 05:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:48.844 05:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:15:48.844 05:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:48.844 05:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58409 /var/tmp/spdk2.sock
00:15:48.844 05:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58409 ']'
00:15:48.845 05:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:15:48.845 05:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:15:48.845 05:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:15:48.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:15:48.845 05:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:15:48.845 05:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:15:48.845 [2024-11-20 05:26:20.538302] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:15:48.845 [2024-11-20 05:26:20.538611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58409 ]
00:15:49.103 [2024-11-20 05:26:20.709775] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58391 has claimed it.
00:15:49.103 [2024-11-20 05:26:20.709851] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:15:49.362 ERROR: process (pid: 58409) is no longer running 00:15:49.362 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58409) - No such process 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58391 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 58391 ']' 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 58391 00:15:49.362 05:26:21 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58391 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58391' 00:15:49.362 killing process with pid 58391 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 58391 00:15:49.362 05:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 58391 00:15:50.736 00:15:50.736 real 0m2.892s 00:15:50.736 user 0m7.678s 00:15:50.736 sys 0m0.488s 00:15:50.736 05:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:50.736 05:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:50.736 ************************************ 00:15:50.736 END TEST locking_overlapped_coremask 00:15:50.736 ************************************ 00:15:50.736 05:26:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:15:50.736 05:26:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:50.736 05:26:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:50.736 05:26:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:50.736 ************************************ 00:15:50.736 START TEST 
locking_overlapped_coremask_via_rpc 00:15:50.736 ************************************ 00:15:50.736 05:26:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:15:50.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.736 05:26:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58462 00:15:50.736 05:26:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58462 /var/tmp/spdk.sock 00:15:50.736 05:26:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58462 ']' 00:15:50.736 05:26:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:15:50.736 05:26:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.736 05:26:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:50.736 05:26:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.736 05:26:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:50.736 05:26:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.994 [2024-11-20 05:26:22.603283] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:15:50.995 [2024-11-20 05:26:22.603694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58462 ] 00:15:50.995 [2024-11-20 05:26:22.770387] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:15:50.995 [2024-11-20 05:26:22.770446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:51.252 [2024-11-20 05:26:22.879168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.252 [2024-11-20 05:26:22.879344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.252 [2024-11-20 05:26:22.879357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:51.817 05:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:51.817 05:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:15:51.817 05:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:15:51.817 05:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58480 00:15:51.817 05:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58480 /var/tmp/spdk2.sock 00:15:51.817 05:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58480 ']' 00:15:51.817 05:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:51.817 05:26:23 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:51.817 05:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:51.817 05:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:51.817 05:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.817 [2024-11-20 05:26:23.501669] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:15:51.817 [2024-11-20 05:26:23.501978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58480 ] 00:15:52.074 [2024-11-20 05:26:23.663710] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:15:52.074 [2024-11-20 05:26:23.663791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:52.074 [2024-11-20 05:26:23.884621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.074 [2024-11-20 05:26:23.884708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.074 [2024-11-20 05:26:23.884723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.603 05:26:26 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.603 [2024-11-20 05:26:26.063584] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58462 has claimed it. 00:15:54.603 request: 00:15:54.603 { 00:15:54.603 "method": "framework_enable_cpumask_locks", 00:15:54.603 "req_id": 1 00:15:54.603 } 00:15:54.603 Got JSON-RPC error response 00:15:54.603 response: 00:15:54.603 { 00:15:54.603 "code": -32603, 00:15:54.603 "message": "Failed to claim CPU core: 2" 00:15:54.603 } 00:15:54.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58462 /var/tmp/spdk.sock 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58462 ']' 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58480 /var/tmp/spdk2.sock 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58480 ']' 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:54.603 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.861 ************************************ 00:15:54.861 END TEST locking_overlapped_coremask_via_rpc 00:15:54.861 ************************************ 00:15:54.861 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:54.861 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:15:54.861 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:15:54.861 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:15:54.861 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:15:54.861 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:15:54.861 00:15:54.861 real 0m4.004s 00:15:54.861 user 0m1.234s 00:15:54.861 sys 0m0.166s 00:15:54.861 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:54.861 05:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.861 05:26:26 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:15:54.862 05:26:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58462 ]] 00:15:54.862 05:26:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 58462 00:15:54.862 05:26:26 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58462 ']' 00:15:54.862 05:26:26 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58462 00:15:54.862 05:26:26 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:15:54.862 05:26:26 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:54.862 05:26:26 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58462 00:15:54.862 05:26:26 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:54.862 05:26:26 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:54.862 05:26:26 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58462' 00:15:54.862 killing process with pid 58462 00:15:54.862 05:26:26 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 58462 00:15:54.862 05:26:26 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 58462 00:15:56.235 05:26:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58480 ]] 00:15:56.235 05:26:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58480 00:15:56.235 05:26:27 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58480 ']' 00:15:56.235 05:26:27 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58480 00:15:56.235 05:26:27 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:15:56.235 05:26:27 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:56.235 05:26:27 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58480 00:15:56.235 killing process with pid 58480 00:15:56.235 05:26:27 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:15:56.235 05:26:27 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:15:56.235 05:26:27 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 58480' 00:15:56.235 05:26:27 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 58480 00:15:56.235 05:26:27 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 58480 00:15:57.604 05:26:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:15:57.604 05:26:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:15:57.604 05:26:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58462 ]] 00:15:57.604 05:26:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58462 00:15:57.604 Process with pid 58462 is not found 00:15:57.604 Process with pid 58480 is not found 00:15:57.604 05:26:29 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58462 ']' 00:15:57.604 05:26:29 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58462 00:15:57.604 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (58462) - No such process 00:15:57.604 05:26:29 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 58462 is not found' 00:15:57.604 05:26:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58480 ]] 00:15:57.604 05:26:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58480 00:15:57.604 05:26:29 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58480 ']' 00:15:57.604 05:26:29 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58480 00:15:57.604 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (58480) - No such process 00:15:57.604 05:26:29 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 58480 is not found' 00:15:57.604 05:26:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:15:57.604 ************************************ 00:15:57.604 END TEST cpu_locks 00:15:57.604 ************************************ 00:15:57.604 00:15:57.604 real 0m32.146s 00:15:57.604 user 0m57.842s 00:15:57.604 sys 0m5.119s 00:15:57.604 05:26:29 event.cpu_locks -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:15:57.604 05:26:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:57.604 ************************************ 00:15:57.604 END TEST event 00:15:57.604 ************************************ 00:15:57.604 00:15:57.604 real 0m58.226s 00:15:57.604 user 1m49.897s 00:15:57.604 sys 0m8.141s 00:15:57.604 05:26:29 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:57.604 05:26:29 event -- common/autotest_common.sh@10 -- # set +x 00:15:57.604 05:26:29 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:15:57.604 05:26:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:57.604 05:26:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:57.604 05:26:29 -- common/autotest_common.sh@10 -- # set +x 00:15:57.604 ************************************ 00:15:57.604 START TEST thread 00:15:57.604 ************************************ 00:15:57.604 05:26:29 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:15:57.604 * Looking for test storage... 
00:15:57.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:15:57.604 05:26:29 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:57.604 05:26:29 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:15:57.604 05:26:29 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:57.861 05:26:29 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:57.861 05:26:29 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.861 05:26:29 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.861 05:26:29 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.861 05:26:29 thread -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.861 05:26:29 thread -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.861 05:26:29 thread -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.861 05:26:29 thread -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.861 05:26:29 thread -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.861 05:26:29 thread -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.861 05:26:29 thread -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.861 05:26:29 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.861 05:26:29 thread -- scripts/common.sh@344 -- # case "$op" in 00:15:57.861 05:26:29 thread -- scripts/common.sh@345 -- # : 1 00:15:57.862 05:26:29 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.862 05:26:29 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.862 05:26:29 thread -- scripts/common.sh@365 -- # decimal 1 00:15:57.862 05:26:29 thread -- scripts/common.sh@353 -- # local d=1 00:15:57.862 05:26:29 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.862 05:26:29 thread -- scripts/common.sh@355 -- # echo 1 00:15:57.862 05:26:29 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.862 05:26:29 thread -- scripts/common.sh@366 -- # decimal 2 00:15:57.862 05:26:29 thread -- scripts/common.sh@353 -- # local d=2 00:15:57.862 05:26:29 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.862 05:26:29 thread -- scripts/common.sh@355 -- # echo 2 00:15:57.862 05:26:29 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.862 05:26:29 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.862 05:26:29 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.862 05:26:29 thread -- scripts/common.sh@368 -- # return 0 00:15:57.862 05:26:29 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.862 05:26:29 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:57.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.862 --rc genhtml_branch_coverage=1 00:15:57.862 --rc genhtml_function_coverage=1 00:15:57.862 --rc genhtml_legend=1 00:15:57.862 --rc geninfo_all_blocks=1 00:15:57.862 --rc geninfo_unexecuted_blocks=1 00:15:57.862 00:15:57.862 ' 00:15:57.862 05:26:29 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:57.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.862 --rc genhtml_branch_coverage=1 00:15:57.862 --rc genhtml_function_coverage=1 00:15:57.862 --rc genhtml_legend=1 00:15:57.862 --rc geninfo_all_blocks=1 00:15:57.862 --rc geninfo_unexecuted_blocks=1 00:15:57.862 00:15:57.862 ' 00:15:57.862 05:26:29 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:57.862 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.862 --rc genhtml_branch_coverage=1 00:15:57.862 --rc genhtml_function_coverage=1 00:15:57.862 --rc genhtml_legend=1 00:15:57.862 --rc geninfo_all_blocks=1 00:15:57.862 --rc geninfo_unexecuted_blocks=1 00:15:57.862 00:15:57.862 ' 00:15:57.862 05:26:29 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:57.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.862 --rc genhtml_branch_coverage=1 00:15:57.862 --rc genhtml_function_coverage=1 00:15:57.862 --rc genhtml_legend=1 00:15:57.862 --rc geninfo_all_blocks=1 00:15:57.862 --rc geninfo_unexecuted_blocks=1 00:15:57.862 00:15:57.862 ' 00:15:57.862 05:26:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:15:57.862 05:26:29 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:15:57.862 05:26:29 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:57.862 05:26:29 thread -- common/autotest_common.sh@10 -- # set +x 00:15:57.862 ************************************ 00:15:57.862 START TEST thread_poller_perf 00:15:57.862 ************************************ 00:15:57.862 05:26:29 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:15:57.862 [2024-11-20 05:26:29.550068] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:15:57.862 [2024-11-20 05:26:29.550414] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58653 ] 00:15:58.119 [2024-11-20 05:26:29.719241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.119 [2024-11-20 05:26:29.839419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.119 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:15:59.490 [2024-11-20T05:26:31.325Z] ====================================== 00:15:59.490 [2024-11-20T05:26:31.325Z] busy:2609789720 (cyc) 00:15:59.490 [2024-11-20T05:26:31.325Z] total_run_count: 304000 00:15:59.490 [2024-11-20T05:26:31.325Z] tsc_hz: 2600000000 (cyc) 00:15:59.490 [2024-11-20T05:26:31.325Z] ====================================== 00:15:59.490 [2024-11-20T05:26:31.325Z] poller_cost: 8584 (cyc), 3301 (nsec) 00:15:59.490 00:15:59.490 real 0m1.501s 00:15:59.490 ************************************ 00:15:59.490 END TEST thread_poller_perf 00:15:59.490 ************************************ 00:15:59.490 user 0m1.317s 00:15:59.490 sys 0m0.075s 00:15:59.490 05:26:31 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:59.490 05:26:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:15:59.490 05:26:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:15:59.490 05:26:31 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:15:59.490 05:26:31 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:59.490 05:26:31 thread -- common/autotest_common.sh@10 -- # set +x 00:15:59.490 ************************************ 00:15:59.490 START TEST thread_poller_perf 00:15:59.490 
************************************ 00:15:59.490 05:26:31 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:15:59.490 [2024-11-20 05:26:31.090490] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:15:59.490 [2024-11-20 05:26:31.090615] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58684 ] 00:15:59.490 [2024-11-20 05:26:31.249999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.747 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:15:59.747 [2024-11-20 05:26:31.370569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.119 [2024-11-20T05:26:32.954Z] ====================================== 00:16:01.119 [2024-11-20T05:26:32.954Z] busy:2603219516 (cyc) 00:16:01.119 [2024-11-20T05:26:32.954Z] total_run_count: 3934000 00:16:01.119 [2024-11-20T05:26:32.954Z] tsc_hz: 2600000000 (cyc) 00:16:01.119 [2024-11-20T05:26:32.954Z] ====================================== 00:16:01.119 [2024-11-20T05:26:32.954Z] poller_cost: 661 (cyc), 254 (nsec) 00:16:01.119 00:16:01.119 real 0m1.476s 00:16:01.119 user 0m1.300s 00:16:01.119 sys 0m0.069s 00:16:01.119 05:26:32 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:01.119 ************************************ 00:16:01.119 END TEST thread_poller_perf 00:16:01.119 ************************************ 00:16:01.119 05:26:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:16:01.119 05:26:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:16:01.119 ************************************ 00:16:01.119 END TEST thread 00:16:01.119 ************************************ 00:16:01.119 
00:16:01.119 real 0m3.211s 00:16:01.119 user 0m2.736s 00:16:01.119 sys 0m0.261s 00:16:01.119 05:26:32 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:01.119 05:26:32 thread -- common/autotest_common.sh@10 -- # set +x 00:16:01.119 05:26:32 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:16:01.119 05:26:32 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:16:01.119 05:26:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:01.119 05:26:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:01.119 05:26:32 -- common/autotest_common.sh@10 -- # set +x 00:16:01.119 ************************************ 00:16:01.119 START TEST app_cmdline 00:16:01.119 ************************************ 00:16:01.119 05:26:32 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:16:01.119 * Looking for test storage... 00:16:01.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:16:01.119 05:26:32 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:01.119 05:26:32 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:16:01.119 05:26:32 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:01.119 05:26:32 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:01.119 05:26:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:16:01.119 05:26:32 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:01.119 05:26:32 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:01.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.119 --rc genhtml_branch_coverage=1 00:16:01.119 --rc genhtml_function_coverage=1 00:16:01.119 --rc 
genhtml_legend=1 00:16:01.119 --rc geninfo_all_blocks=1 00:16:01.119 --rc geninfo_unexecuted_blocks=1 00:16:01.119 00:16:01.119 ' 00:16:01.119 05:26:32 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:01.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.119 --rc genhtml_branch_coverage=1 00:16:01.119 --rc genhtml_function_coverage=1 00:16:01.119 --rc genhtml_legend=1 00:16:01.119 --rc geninfo_all_blocks=1 00:16:01.119 --rc geninfo_unexecuted_blocks=1 00:16:01.119 00:16:01.119 ' 00:16:01.119 05:26:32 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:01.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.119 --rc genhtml_branch_coverage=1 00:16:01.119 --rc genhtml_function_coverage=1 00:16:01.119 --rc genhtml_legend=1 00:16:01.119 --rc geninfo_all_blocks=1 00:16:01.119 --rc geninfo_unexecuted_blocks=1 00:16:01.119 00:16:01.119 ' 00:16:01.119 05:26:32 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:01.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.119 --rc genhtml_branch_coverage=1 00:16:01.119 --rc genhtml_function_coverage=1 00:16:01.119 --rc genhtml_legend=1 00:16:01.119 --rc geninfo_all_blocks=1 00:16:01.119 --rc geninfo_unexecuted_blocks=1 00:16:01.119 00:16:01.119 ' 00:16:01.119 05:26:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:16:01.119 05:26:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=58773 00:16:01.119 05:26:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 58773 00:16:01.119 05:26:32 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:16:01.119 05:26:32 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 58773 ']' 00:16:01.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:01.119 05:26:32 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.119 05:26:32 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:01.119 05:26:32 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.119 05:26:32 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:01.119 05:26:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:16:01.119 [2024-11-20 05:26:32.867747] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:16:01.119 [2024-11-20 05:26:32.867910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58773 ] 00:16:01.378 [2024-11-20 05:26:33.031870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.378 [2024-11-20 05:26:33.150767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.309 05:26:33 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:02.309 05:26:33 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:16:02.309 05:26:33 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:16:02.309 { 00:16:02.309 "version": "SPDK v25.01-pre git sha1 95f6a056e", 00:16:02.309 "fields": { 00:16:02.309 "major": 25, 00:16:02.309 "minor": 1, 00:16:02.309 "patch": 0, 00:16:02.309 "suffix": "-pre", 00:16:02.309 "commit": "95f6a056e" 00:16:02.309 } 00:16:02.309 } 00:16:02.309 05:26:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:16:02.309 05:26:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:16:02.309 05:26:33 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:16:02.309 05:26:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:16:02.309 05:26:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:16:02.309 05:26:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:16:02.309 05:26:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:16:02.309 05:26:33 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.309 05:26:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:16:02.309 05:26:33 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.309 05:26:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:16:02.309 05:26:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:16:02.309 05:26:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:02.309 05:26:33 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:16:02.309 05:26:33 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:02.309 05:26:33 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.309 05:26:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.309 05:26:33 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.309 05:26:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.309 05:26:33 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.309 05:26:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.310 05:26:33 app_cmdline -- common/autotest_common.sh@644 -- # 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.310 05:26:33 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:02.310 05:26:33 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:02.568 request: 00:16:02.568 { 00:16:02.568 "method": "env_dpdk_get_mem_stats", 00:16:02.568 "req_id": 1 00:16:02.568 } 00:16:02.568 Got JSON-RPC error response 00:16:02.568 response: 00:16:02.568 { 00:16:02.568 "code": -32601, 00:16:02.568 "message": "Method not found" 00:16:02.568 } 00:16:02.568 05:26:34 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:16:02.568 05:26:34 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:02.568 05:26:34 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:02.568 05:26:34 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:02.568 05:26:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 58773 00:16:02.568 05:26:34 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 58773 ']' 00:16:02.568 05:26:34 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 58773 00:16:02.568 05:26:34 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:16:02.568 05:26:34 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:02.568 05:26:34 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58773 00:16:02.568 killing process with pid 58773 00:16:02.568 05:26:34 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:02.568 05:26:34 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:02.568 05:26:34 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58773' 00:16:02.568 05:26:34 app_cmdline -- common/autotest_common.sh@971 -- # kill 58773 00:16:02.568 05:26:34 app_cmdline -- common/autotest_common.sh@976 -- # wait 58773 00:16:03.942 
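The `Method not found` response above is the expected outcome: `spdk_tgt` was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so `env_dpdk_get_mem_stats` is rejected with JSON-RPC error -32601. An illustrative sketch of that allowlist behavior (not SPDK code; the dispatcher shape here is invented for illustration, only the method names and error code come from the log):

```python
# Illustrative allowlist dispatch, mirroring the --rpcs-allowed check above.
ALLOWED = {"spdk_get_version", "rpc_get_methods"}

def dispatch(method: str) -> dict:
    if method not in ALLOWED:
        # JSON-RPC 2.0 reserves -32601 for methods that are unknown or unavailable.
        return {"code": -32601, "message": "Method not found"}
    return {"code": 0, "message": "OK"}

print(dispatch("env_dpdk_get_mem_stats"))  # the call the test expects to fail
```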
00:16:03.942 real 0m3.063s 00:16:03.942 user 0m3.285s 00:16:03.942 sys 0m0.492s 00:16:03.942 05:26:35 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:03.942 ************************************ 00:16:03.942 END TEST app_cmdline 00:16:03.942 ************************************ 00:16:03.942 05:26:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:16:03.942 05:26:35 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:16:03.942 05:26:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:03.942 05:26:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:03.942 05:26:35 -- common/autotest_common.sh@10 -- # set +x 00:16:03.942 ************************************ 00:16:03.942 START TEST version 00:16:03.942 ************************************ 00:16:03.942 05:26:35 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:16:04.201 * Looking for test storage... 
00:16:04.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:16:04.201 05:26:35 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:04.201 05:26:35 version -- common/autotest_common.sh@1691 -- # lcov --version 00:16:04.201 05:26:35 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:04.201 05:26:35 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:04.201 05:26:35 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.201 05:26:35 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.201 05:26:35 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.201 05:26:35 version -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.201 05:26:35 version -- scripts/common.sh@336 -- # read -ra ver1 00:16:04.201 05:26:35 version -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.201 05:26:35 version -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.201 05:26:35 version -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.201 05:26:35 version -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.201 05:26:35 version -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.201 05:26:35 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.201 05:26:35 version -- scripts/common.sh@344 -- # case "$op" in 00:16:04.201 05:26:35 version -- scripts/common.sh@345 -- # : 1 00:16:04.201 05:26:35 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.201 05:26:35 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:04.201 05:26:35 version -- scripts/common.sh@365 -- # decimal 1 00:16:04.201 05:26:35 version -- scripts/common.sh@353 -- # local d=1 00:16:04.201 05:26:35 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.201 05:26:35 version -- scripts/common.sh@355 -- # echo 1 00:16:04.201 05:26:35 version -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.201 05:26:35 version -- scripts/common.sh@366 -- # decimal 2 00:16:04.201 05:26:35 version -- scripts/common.sh@353 -- # local d=2 00:16:04.201 05:26:35 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.201 05:26:35 version -- scripts/common.sh@355 -- # echo 2 00:16:04.201 05:26:35 version -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.201 05:26:35 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.201 05:26:35 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.201 05:26:35 version -- scripts/common.sh@368 -- # return 0 00:16:04.201 05:26:35 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.201 05:26:35 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:04.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.201 --rc genhtml_branch_coverage=1 00:16:04.201 --rc genhtml_function_coverage=1 00:16:04.201 --rc genhtml_legend=1 00:16:04.201 --rc geninfo_all_blocks=1 00:16:04.201 --rc geninfo_unexecuted_blocks=1 00:16:04.201 00:16:04.201 ' 00:16:04.201 05:26:35 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:04.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.201 --rc genhtml_branch_coverage=1 00:16:04.201 --rc genhtml_function_coverage=1 00:16:04.201 --rc genhtml_legend=1 00:16:04.201 --rc geninfo_all_blocks=1 00:16:04.201 --rc geninfo_unexecuted_blocks=1 00:16:04.201 00:16:04.201 ' 00:16:04.201 05:26:35 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:04.201 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.201 --rc genhtml_branch_coverage=1 00:16:04.201 --rc genhtml_function_coverage=1 00:16:04.201 --rc genhtml_legend=1 00:16:04.201 --rc geninfo_all_blocks=1 00:16:04.201 --rc geninfo_unexecuted_blocks=1 00:16:04.201 00:16:04.201 ' 00:16:04.201 05:26:35 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:04.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.201 --rc genhtml_branch_coverage=1 00:16:04.201 --rc genhtml_function_coverage=1 00:16:04.201 --rc genhtml_legend=1 00:16:04.201 --rc geninfo_all_blocks=1 00:16:04.201 --rc geninfo_unexecuted_blocks=1 00:16:04.201 00:16:04.201 ' 00:16:04.201 05:26:35 version -- app/version.sh@17 -- # get_header_version major 00:16:04.201 05:26:35 version -- app/version.sh@14 -- # cut -f2 00:16:04.201 05:26:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:04.201 05:26:35 version -- app/version.sh@14 -- # tr -d '"' 00:16:04.201 05:26:35 version -- app/version.sh@17 -- # major=25 00:16:04.201 05:26:35 version -- app/version.sh@18 -- # get_header_version minor 00:16:04.201 05:26:35 version -- app/version.sh@14 -- # cut -f2 00:16:04.201 05:26:35 version -- app/version.sh@14 -- # tr -d '"' 00:16:04.201 05:26:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:04.201 05:26:35 version -- app/version.sh@18 -- # minor=1 00:16:04.201 05:26:35 version -- app/version.sh@19 -- # get_header_version patch 00:16:04.201 05:26:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:04.201 05:26:35 version -- app/version.sh@14 -- # cut -f2 00:16:04.201 05:26:35 version -- app/version.sh@14 -- # tr -d '"' 00:16:04.201 05:26:35 version -- app/version.sh@19 -- # patch=0 00:16:04.201 
05:26:35 version -- app/version.sh@20 -- # get_header_version suffix 00:16:04.201 05:26:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:04.201 05:26:35 version -- app/version.sh@14 -- # cut -f2 00:16:04.201 05:26:35 version -- app/version.sh@14 -- # tr -d '"' 00:16:04.201 05:26:35 version -- app/version.sh@20 -- # suffix=-pre 00:16:04.201 05:26:35 version -- app/version.sh@22 -- # version=25.1 00:16:04.201 05:26:35 version -- app/version.sh@25 -- # (( patch != 0 )) 00:16:04.201 05:26:35 version -- app/version.sh@28 -- # version=25.1rc0 00:16:04.201 05:26:35 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:04.201 05:26:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:16:04.201 05:26:35 version -- app/version.sh@30 -- # py_version=25.1rc0 00:16:04.201 05:26:35 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:16:04.201 00:16:04.201 real 0m0.189s 00:16:04.201 user 0m0.112s 00:16:04.201 sys 0m0.108s 00:16:04.201 ************************************ 00:16:04.201 END TEST version 00:16:04.201 ************************************ 00:16:04.201 05:26:35 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:04.201 05:26:35 version -- common/autotest_common.sh@10 -- # set +x 00:16:04.201 05:26:35 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:16:04.201 05:26:35 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:16:04.201 05:26:35 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:16:04.202 05:26:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:04.202 05:26:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:04.202 05:26:35 -- 
common/autotest_common.sh@10 -- # set +x 00:16:04.202 ************************************ 00:16:04.202 START TEST bdev_raid 00:16:04.202 ************************************ 00:16:04.202 05:26:35 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:16:04.202 * Looking for test storage... 00:16:04.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:04.202 05:26:36 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:04.202 05:26:36 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:04.202 05:26:36 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:16:04.460 05:26:36 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@345 -- # : 1 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.460 05:26:36 bdev_raid -- scripts/common.sh@368 -- # return 0 00:16:04.460 05:26:36 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.460 05:26:36 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:04.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.460 --rc genhtml_branch_coverage=1 00:16:04.460 --rc genhtml_function_coverage=1 00:16:04.460 --rc genhtml_legend=1 00:16:04.460 --rc geninfo_all_blocks=1 00:16:04.460 --rc geninfo_unexecuted_blocks=1 00:16:04.460 00:16:04.460 ' 00:16:04.460 05:26:36 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:04.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.460 --rc genhtml_branch_coverage=1 00:16:04.460 --rc genhtml_function_coverage=1 00:16:04.460 --rc genhtml_legend=1 00:16:04.460 --rc geninfo_all_blocks=1 00:16:04.460 --rc geninfo_unexecuted_blocks=1 00:16:04.460 00:16:04.460 ' 00:16:04.460 05:26:36 bdev_raid -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:16:04.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.460 --rc genhtml_branch_coverage=1 00:16:04.460 --rc genhtml_function_coverage=1 00:16:04.460 --rc genhtml_legend=1 00:16:04.460 --rc geninfo_all_blocks=1 00:16:04.461 --rc geninfo_unexecuted_blocks=1 00:16:04.461 00:16:04.461 ' 00:16:04.461 05:26:36 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:04.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.461 --rc genhtml_branch_coverage=1 00:16:04.461 --rc genhtml_function_coverage=1 00:16:04.461 --rc genhtml_legend=1 00:16:04.461 --rc geninfo_all_blocks=1 00:16:04.461 --rc geninfo_unexecuted_blocks=1 00:16:04.461 00:16:04.461 ' 00:16:04.461 05:26:36 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:04.461 05:26:36 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:16:04.461 05:26:36 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:16:04.461 05:26:36 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:16:04.461 05:26:36 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:16:04.461 05:26:36 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:16:04.461 05:26:36 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:16:04.461 05:26:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:04.461 05:26:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:04.461 05:26:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:04.461 ************************************ 00:16:04.461 START TEST raid1_resize_data_offset_test 00:16:04.461 ************************************ 00:16:04.461 05:26:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:16:04.461 05:26:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=58944 00:16:04.461 05:26:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 58944' 00:16:04.461 Process raid pid: 58944 00:16:04.461 05:26:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 58944 00:16:04.461 05:26:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 58944 ']' 00:16:04.461 05:26:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.461 05:26:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:04.461 05:26:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.461 05:26:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:04.461 05:26:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.461 05:26:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:04.461 [2024-11-20 05:26:36.167076] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:16:04.461 [2024-11-20 05:26:36.167481] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.720 [2024-11-20 05:26:36.319390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.720 [2024-11-20 05:26:36.420708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.720 [2024-11-20 05:26:36.542157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.720 [2024-11-20 05:26:36.542279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.371 05:26:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:05.371 05:26:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:16:05.371 05:26:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:16:05.371 05:26:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.371 05:26:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.371 malloc0 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.371 malloc1 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.371 05:26:37 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.371 null0 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.371 [2024-11-20 05:26:37.112817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:16:05.371 [2024-11-20 05:26:37.114434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:05.371 [2024-11-20 05:26:37.114477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:16:05.371 [2024-11-20 05:26:37.114598] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:05.371 [2024-11-20 05:26:37.114614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:16:05.371 [2024-11-20 05:26:37.114851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:05.371 [2024-11-20 05:26:37.114983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:05.371 [2024-11-20 05:26:37.114997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:16:05.371 [2024-11-20 05:26:37.115122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.371 [2024-11-20 05:26:37.152829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.371 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.952 malloc2 00:16:05.952 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.952 05:26:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:16:05.952 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.952 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.952 [2024-11-20 05:26:37.487888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:05.953 [2024-11-20 05:26:37.498431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.953 [2024-11-20 05:26:37.500103] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 58944 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 58944 ']' 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 58944 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58944 00:16:05.953 killing process with pid 58944 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58944' 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 58944 00:16:05.953 05:26:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 58944 00:16:05.953 [2024-11-20 05:26:37.558072] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.953 [2024-11-20 05:26:37.558313] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:16:05.953 [2024-11-20 05:26:37.558375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.953 [2024-11-20 05:26:37.558390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:16:05.953 [2024-11-20 05:26:37.577258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.953 [2024-11-20 05:26:37.577546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.953 [2024-11-20 05:26:37.577565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:16:06.894 [2024-11-20 05:26:38.492396] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.461 05:26:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:16:07.461 00:16:07.461 real 0m3.001s 00:16:07.461 user 0m2.931s 00:16:07.461 sys 0m0.406s 00:16:07.461 05:26:39 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:07.461 05:26:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.461 ************************************ 00:16:07.461 END TEST raid1_resize_data_offset_test 00:16:07.461 ************************************ 00:16:07.461 05:26:39 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:16:07.461 05:26:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:07.461 05:26:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:07.461 05:26:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.461 ************************************ 00:16:07.461 START TEST raid0_resize_superblock_test 00:16:07.461 ************************************ 00:16:07.461 05:26:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:16:07.461 05:26:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:16:07.461 05:26:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59011 00:16:07.461 Process raid pid: 59011 00:16:07.461 05:26:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59011' 00:16:07.461 05:26:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59011 00:16:07.461 05:26:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 59011 ']' 00:16:07.461 05:26:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.461 05:26:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:07.461 05:26:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:07.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.461 05:26:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.461 05:26:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:07.461 05:26:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.461 [2024-11-20 05:26:39.211424] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:16:07.461 [2024-11-20 05:26:39.211523] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.719 [2024-11-20 05:26:39.362848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.719 [2024-11-20 05:26:39.464449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.976 [2024-11-20 05:26:39.586188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.976 [2024-11-20 05:26:39.586236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.234 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:08.234 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:16:08.234 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:16:08.234 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.234 05:26:40 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:08.800 malloc0 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.800 [2024-11-20 05:26:40.352835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:16:08.800 [2024-11-20 05:26:40.352911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.800 [2024-11-20 05:26:40.352934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:08.800 [2024-11-20 05:26:40.352944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.800 [2024-11-20 05:26:40.354919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.800 [2024-11-20 05:26:40.354955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:16:08.800 pt0 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.800 0e3359ca-a805-4943-b0b2-e554f7cf1e0b 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd 
bdev_lvol_create -l lvs0 lvol0 64 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.800 e1d3b08a-3d11-4225-ace3-57f0f63795d2 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.800 5240b3c4-da4e-4744-93a7-20dde3994d18 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.800 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.800 [2024-11-20 05:26:40.456120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e1d3b08a-3d11-4225-ace3-57f0f63795d2 is claimed 00:16:08.800 [2024-11-20 05:26:40.456195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5240b3c4-da4e-4744-93a7-20dde3994d18 is claimed 00:16:08.800 [2024-11-20 05:26:40.456301] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:08.800 [2024-11-20 05:26:40.456315] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 
00:16:08.800 [2024-11-20 05:26:40.456542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:08.800 [2024-11-20 05:26:40.456690] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:08.800 [2024-11-20 05:26:40.456702] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:16:08.800 [2024-11-20 05:26:40.456821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:16:08.801 
05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.801 [2024-11-20 05:26:40.532332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.801 [2024-11-20 05:26:40.552298] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:08.801 [2024-11-20 05:26:40.552324] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e1d3b08a-3d11-4225-ace3-57f0f63795d2' was resized: old size 131072, new size 204800 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.801 [2024-11-20 05:26:40.560214] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:08.801 [2024-11-20 05:26:40.560234] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '5240b3c4-da4e-4744-93a7-20dde3994d18' was resized: old size 131072, new size 204800 00:16:08.801 [2024-11-20 05:26:40.560255] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.801 05:26:40 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.801 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.801 [2024-11-20 05:26:40.628322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.060 [2024-11-20 05:26:40.652149] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:16:09.060 [2024-11-20 05:26:40.652212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:16:09.060 [2024-11-20 05:26:40.652225] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.060 [2024-11-20 05:26:40.652239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:16:09.060 [2024-11-20 05:26:40.652338] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.060 [2024-11-20 05:26:40.652380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.060 [2024-11-20 05:26:40.652390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.060 [2024-11-20 05:26:40.660092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:16:09.060 [2024-11-20 05:26:40.660139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.060 [2024-11-20 05:26:40.660156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:09.060 [2024-11-20 05:26:40.660167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.060 [2024-11-20 05:26:40.662078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.060 [2024-11-20 05:26:40.662110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:16:09.060 [2024-11-20 05:26:40.663453] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e1d3b08a-3d11-4225-ace3-57f0f63795d2 00:16:09.060 [2024-11-20 05:26:40.663505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e1d3b08a-3d11-4225-ace3-57f0f63795d2 is claimed 00:16:09.060 [2024-11-20 05:26:40.663585] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 5240b3c4-da4e-4744-93a7-20dde3994d18 00:16:09.060 [2024-11-20 05:26:40.663600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5240b3c4-da4e-4744-93a7-20dde3994d18 is claimed 00:16:09.060 [2024-11-20 05:26:40.663695] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 5240b3c4-da4e-4744-93a7-20dde3994d18 (2) smaller than existing raid bdev Raid (3) 00:16:09.060 [2024-11-20 05:26:40.663714] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev e1d3b08a-3d11-4225-ace3-57f0f63795d2: File exists 00:16:09.060 [2024-11-20 05:26:40.663746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:09.060 [2024-11-20 05:26:40.663755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:16:09.060 [2024-11-20 05:26:40.663962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:09.060 [2024-11-20 05:26:40.664079] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:09.060 [2024-11-20 05:26:40.664085] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:16:09.060 [2024-11-20 05:26:40.664232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.060 pt0 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:16:09.060 [2024-11-20 05:26:40.676394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59011 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 59011 ']' 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 59011 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59011 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:09.060 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:09.061 killing process with pid 59011 00:16:09.061 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59011' 00:16:09.061 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 59011 00:16:09.061 [2024-11-20 05:26:40.729518] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:09.061 05:26:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 59011 00:16:09.061 [2024-11-20 05:26:40.729594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.061 [2024-11-20 05:26:40.729638] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.061 [2024-11-20 05:26:40.729646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:16:09.994 [2024-11-20 05:26:41.571136] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:10.561 05:26:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:16:10.561 00:16:10.561 real 0m3.168s 00:16:10.561 user 0m3.269s 00:16:10.561 sys 0m0.431s 00:16:10.561 ************************************ 00:16:10.561 END TEST raid0_resize_superblock_test 00:16:10.561 ************************************ 00:16:10.561 05:26:42 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:16:10.561 05:26:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.561 05:26:42 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:16:10.561 05:26:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:10.561 05:26:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:10.561 05:26:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:10.561 ************************************ 00:16:10.561 START TEST raid1_resize_superblock_test 00:16:10.561 ************************************ 00:16:10.561 05:26:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:16:10.561 05:26:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:16:10.561 05:26:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59093 00:16:10.561 Process raid pid: 59093 00:16:10.561 05:26:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59093' 00:16:10.561 05:26:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59093 00:16:10.561 05:26:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 59093 ']' 00:16:10.562 05:26:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.562 05:26:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:10.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.562 05:26:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:10.562 05:26:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:10.562 05:26:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.562 05:26:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:10.821 [2024-11-20 05:26:42.435797] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:16:10.821 [2024-11-20 05:26:42.435937] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.821 [2024-11-20 05:26:42.595383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.080 [2024-11-20 05:26:42.713109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.080 [2024-11-20 05:26:42.861527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.080 [2024-11-20 05:26:42.861577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.645 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:11.645 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:16:11.645 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:16:11.645 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.645 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.903 malloc0 00:16:11.903 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.903 05:26:43 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:16:11.903 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.903 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.903 [2024-11-20 05:26:43.684485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:16:11.903 [2024-11-20 05:26:43.684570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.903 [2024-11-20 05:26:43.684595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:11.903 [2024-11-20 05:26:43.684608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.903 [2024-11-20 05:26:43.686941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.903 [2024-11-20 05:26:43.686984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:16:11.903 pt0 00:16:11.903 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.903 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:16:11.903 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.903 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.162 f1c822e4-8ce2-4cd2-bd1f-80bf10ee6982 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.162 05:26:43 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.162 ce1c9b02-7cd5-40c3-9ef8-a17c568eb8be 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.162 1d913fd5-b88a-4b0d-8445-8d10d3b49465 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.162 [2024-11-20 05:26:43.791532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ce1c9b02-7cd5-40c3-9ef8-a17c568eb8be is claimed 00:16:12.162 [2024-11-20 05:26:43.791631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1d913fd5-b88a-4b0d-8445-8d10d3b49465 is claimed 00:16:12.162 [2024-11-20 05:26:43.791771] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:12.162 [2024-11-20 05:26:43.791786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:16:12.162 [2024-11-20 05:26:43.792060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:12.162 [2024-11-20 05:26:43.792245] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:12.162 [2024-11-20 05:26:43.792255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:16:12.162 [2024-11-20 05:26:43.792418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.162 [2024-11-20 05:26:43.875836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.162 [2024-11-20 05:26:43.907772] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:12.162 [2024-11-20 05:26:43.907809] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ce1c9b02-7cd5-40c3-9ef8-a17c568eb8be' was resized: old size 131072, new size 204800 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:16:12.162 05:26:43 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.162 [2024-11-20 05:26:43.915660] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:12.162 [2024-11-20 05:26:43.915687] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '1d913fd5-b88a-4b0d-8445-8d10d3b49465' was resized: old size 131072, new size 204800 00:16:12.162 [2024-11-20 05:26:43.915711] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:16:12.162 [2024-11-20 05:26:43.983858] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.162 05:26:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.421 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:16:12.421 05:26:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:16:12.421 05:26:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:16:12.421 05:26:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:16:12.421 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.421 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.421 [2024-11-20 05:26:44.015617] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:16:12.421 [2024-11-20 05:26:44.015712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:16:12.421 [2024-11-20 05:26:44.015741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:16:12.421 [2024-11-20 05:26:44.015932] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.421 [2024-11-20 05:26:44.016141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.421 [2024-11-20 05:26:44.016205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.421 [2024-11-20 05:26:44.016218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:16:12.421 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.421 05:26:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:16:12.421 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.421 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.421 [2024-11-20 05:26:44.023526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:16:12.421 [2024-11-20 05:26:44.023594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.421 [2024-11-20 05:26:44.023615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:12.421 [2024-11-20 05:26:44.023631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.421 [2024-11-20 05:26:44.025984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.421 [2024-11-20 05:26:44.026026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:16:12.421 [2024-11-20 05:26:44.027724] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
ce1c9b02-7cd5-40c3-9ef8-a17c568eb8be 00:16:12.421 [2024-11-20 05:26:44.027794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ce1c9b02-7cd5-40c3-9ef8-a17c568eb8be is claimed 00:16:12.421 [2024-11-20 05:26:44.027921] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 1d913fd5-b88a-4b0d-8445-8d10d3b49465 00:16:12.421 [2024-11-20 05:26:44.027941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1d913fd5-b88a-4b0d-8445-8d10d3b49465 is claimed 00:16:12.421 [2024-11-20 05:26:44.028106] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 1d913fd5-b88a-4b0d-8445-8d10d3b49465 (2) smaller than existing raid bdev Raid (3) 00:16:12.421 [2024-11-20 05:26:44.028127] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev ce1c9b02-7cd5-40c3-9ef8-a17c568eb8be: File exists 00:16:12.421 pt0 00:16:12.421 [2024-11-20 05:26:44.028165] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:12.421 [2024-11-20 05:26:44.028176] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:12.421 [2024-11-20 05:26:44.028455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:12.421 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.421 [2024-11-20 05:26:44.028617] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:12.422 [2024-11-20 05:26:44.028626] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:16:12.422 [2024-11-20 05:26:44.028777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.422 [2024-11-20 05:26:44.044107] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59093 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 59093 ']' 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 59093 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59093 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:12.422 killing process with pid 59093 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59093' 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 59093 00:16:12.422 [2024-11-20 05:26:44.088378] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:12.422 05:26:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 59093 00:16:12.422 [2024-11-20 05:26:44.088469] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.422 [2024-11-20 05:26:44.088527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.422 [2024-11-20 05:26:44.088536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:16:13.356 [2024-11-20 05:26:45.011668] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.289 05:26:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:16:14.289 00:16:14.289 real 0m3.391s 00:16:14.289 user 0m3.523s 00:16:14.289 sys 0m0.470s 00:16:14.289 ************************************ 00:16:14.289 END TEST raid1_resize_superblock_test 00:16:14.289 ************************************ 00:16:14.289 05:26:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:14.289 05:26:45 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:14.289 05:26:45 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:16:14.289 05:26:45 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:16:14.289 05:26:45 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:16:14.289 05:26:45 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:16:14.289 05:26:45 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:16:14.289 05:26:45 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:16:14.289 05:26:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:14.289 05:26:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:14.289 05:26:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.289 ************************************ 00:16:14.289 START TEST raid_function_test_raid0 00:16:14.289 ************************************ 00:16:14.289 05:26:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:16:14.289 05:26:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:16:14.289 05:26:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:16:14.289 05:26:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:16:14.289 05:26:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=59190 00:16:14.289 Process raid pid: 59190 00:16:14.289 05:26:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59190' 00:16:14.289 05:26:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 59190 00:16:14.289 05:26:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 59190 ']' 00:16:14.289 05:26:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.289 
05:26:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:14.289 05:26:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:14.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.289 05:26:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.289 05:26:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:14.289 05:26:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:16:14.289 [2024-11-20 05:26:45.879147] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:16:14.289 [2024-11-20 05:26:45.879264] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.289 [2024-11-20 05:26:46.033007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.546 [2024-11-20 05:26:46.151006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.546 [2024-11-20 05:26:46.302754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.546 [2024-11-20 05:26:46.302811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:16:15.212 05:26:46 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:16:15.212 Base_1 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:16:15.212 Base_2 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:16:15.212 [2024-11-20 05:26:46.808259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:15.212 [2024-11-20 05:26:46.810225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:15.212 [2024-11-20 05:26:46.810300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:15.212 [2024-11-20 05:26:46.810313] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:15.212 [2024-11-20 05:26:46.810598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:15.212 [2024-11-20 05:26:46.810734] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:15.212 [2024-11-20 05:26:46.810743] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000007780 00:16:15.212 [2024-11-20 05:26:46.810883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:16:15.212 05:26:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:16:15.472 [2024-11-20 05:26:47.028393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:15.472 /dev/nbd0 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.472 1+0 records in 00:16:15.472 1+0 records out 00:16:15.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028554 s, 14.3 MB/s 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 
-- # size=4096 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:15.472 { 00:16:15.472 "nbd_device": "/dev/nbd0", 00:16:15.472 "bdev_name": "raid" 00:16:15.472 } 00:16:15.472 ]' 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:15.472 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:15.472 { 00:16:15.472 "nbd_device": "/dev/nbd0", 00:16:15.472 "bdev_name": "raid" 00:16:15.472 } 00:16:15.472 ]' 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:16:15.773 05:26:47 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:16:15.773 
05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:16:15.773 4096+0 records in 00:16:15.773 4096+0 records out 00:16:15.773 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0191869 s, 109 MB/s 00:16:15.773 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:16:16.032 4096+0 records in 00:16:16.032 4096+0 records out 00:16:16.032 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.262374 s, 8.0 MB/s 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:16:16.032 128+0 records in 00:16:16.032 128+0 records out 00:16:16.032 65536 bytes (66 kB, 64 KiB) copied, 0.000490277 s, 134 MB/s 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 
00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:16:16.032 2035+0 records in 00:16:16.032 2035+0 records out 00:16:16.032 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00643131 s, 162 MB/s 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:16:16.032 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:16:16.032 456+0 records in 00:16:16.032 456+0 records out 00:16:16.033 233472 bytes (233 kB, 228 KiB) copied, 0.00197389 s, 118 MB/s 00:16:16.033 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:16:16.033 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:16:16.033 05:26:47 bdev_raid.raid_function_test_raid0 
-- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:16.033 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:16:16.033 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:16:16.033 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:16:16.033 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:16.033 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.033 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:16.033 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:16.033 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:16:16.033 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.033 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:16.289 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:16.290 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:16.290 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:16.290 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.290 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.290 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:16.290 [2024-11-20 05:26:47.914059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.290 05:26:47 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:16:16.290 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:16:16.290 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:16:16.290 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.290 05:26:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 59190 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 59190 ']' 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 59190 
00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59190 00:16:16.546 05:26:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:16.547 05:26:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:16.547 killing process with pid 59190 00:16:16.547 05:26:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59190' 00:16:16.547 05:26:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 59190 00:16:16.547 [2024-11-20 05:26:48.186746] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.547 05:26:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 59190 00:16:16.547 [2024-11-20 05:26:48.186858] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.547 [2024-11-20 05:26:48.186905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.547 [2024-11-20 05:26:48.186918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:16:16.547 [2024-11-20 05:26:48.295105] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.111 05:26:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:16:17.111 00:16:17.111 real 0m3.075s 00:16:17.111 user 0m3.709s 00:16:17.111 sys 0m0.747s 00:16:17.111 05:26:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:17.111 05:26:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 
00:16:17.111 ************************************ 00:16:17.111 END TEST raid_function_test_raid0 00:16:17.111 ************************************ 00:16:17.111 05:26:48 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:16:17.111 05:26:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:17.111 05:26:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:17.111 05:26:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:17.111 ************************************ 00:16:17.111 START TEST raid_function_test_concat 00:16:17.111 ************************************ 00:16:17.111 05:26:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:16:17.111 05:26:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:16:17.111 05:26:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:16:17.111 05:26:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:16:17.111 05:26:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=59308 00:16:17.111 Process raid pid: 59308 00:16:17.111 05:26:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59308' 00:16:17.111 05:26:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 59308 00:16:17.111 05:26:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 59308 ']' 00:16:17.111 05:26:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.111 05:26:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:17.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:17.111 05:26:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.111 05:26:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:17.111 05:26:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:16:17.111 05:26:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:17.369 [2024-11-20 05:26:48.993961] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:16:17.369 [2024-11-20 05:26:48.994478] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.369 [2024-11-20 05:26:49.150917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.626 [2024-11-20 05:26:49.251863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.626 [2024-11-20 05:26:49.373598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.626 [2024-11-20 05:26:49.373647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:16:18.192 Base_1 
00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:16:18.192 Base_2 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:16:18.192 [2024-11-20 05:26:49.815585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:18.192 [2024-11-20 05:26:49.817231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:18.192 [2024-11-20 05:26:49.817293] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:18.192 [2024-11-20 05:26:49.817303] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:18.192 [2024-11-20 05:26:49.817535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:18.192 [2024-11-20 05:26:49.817646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:18.192 [2024-11-20 05:26:49.817654] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:16:18.192 [2024-11-20 05:26:49.817759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.192 05:26:49 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:16:18.192 05:26:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:16:18.193 05:26:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:16:18.193 05:26:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.193 05:26:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:16:18.193 05:26:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:18.193 05:26:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:18.193 05:26:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:18.193 05:26:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:16:18.193 05:26:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:18.193 05:26:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:18.193 05:26:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:16:18.450 [2024-11-20 05:26:50.047745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:18.450 /dev/nbd0 00:16:18.450 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:18.450 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:18.450 05:26:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:18.450 05:26:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:16:18.450 05:26:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:18.450 05:26:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:18.450 05:26:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:18.450 05:26:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:16:18.450 05:26:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:18.450 05:26:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:18.450 05:26:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:18.450 1+0 records in 00:16:18.450 1+0 records out 00:16:18.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313709 s, 13.1 MB/s 00:16:18.450 05:26:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.450 05:26:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 00:16:18.450 05:26:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.450 05:26:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:18.451 05:26:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:16:18.451 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:18.451 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:18.451 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:16:18.451 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.451 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:18.708 { 00:16:18.708 "nbd_device": "/dev/nbd0", 00:16:18.708 "bdev_name": "raid" 00:16:18.708 } 00:16:18.708 ]' 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:18.708 { 00:16:18.708 "nbd_device": "/dev/nbd0", 00:16:18.708 "bdev_name": "raid" 00:16:18.708 } 00:16:18.708 ]' 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:16:18.708 05:26:50 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:16:18.708 4096+0 records in 00:16:18.708 4096+0 records out 00:16:18.708 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0291725 s, 71.9 MB/s 00:16:18.708 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:16:18.966 4096+0 records in 00:16:18.966 4096+0 records out 00:16:18.966 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.259623 s, 8.1 MB/s 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:16:18.966 128+0 records in 00:16:18.966 128+0 records out 00:16:18.966 65536 bytes (66 kB, 64 KiB) copied, 0.000831772 s, 78.8 MB/s 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:16:18.966 2035+0 records in 00:16:18.966 2035+0 records out 00:16:18.966 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00670247 s, 155 MB/s 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:16:18.966 456+0 records in 00:16:18.966 456+0 records out 00:16:18.966 233472 bytes (233 kB, 228 KiB) copied, 0.00252648 s, 92.4 MB/s 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.966 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:19.224 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:19.224 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:19.224 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:19.224 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:19.224 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:19.224 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:19.224 [2024-11-20 05:26:50.926537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.224 05:26:50 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:16:19.224 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:16:19.224 05:26:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:16:19.224 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:16:19.224 05:26:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:16:19.481 05:26:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:19.481 05:26:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:19.481 05:26:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:19.481 05:26:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 59308 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 59308 ']' 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- 
# kill -0 59308 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59308 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:19.482 killing process with pid 59308 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59308' 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 59308 00:16:19.482 [2024-11-20 05:26:51.218198] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.482 05:26:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 59308 00:16:19.482 [2024-11-20 05:26:51.218315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.482 [2024-11-20 05:26:51.218381] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.482 [2024-11-20 05:26:51.218394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:16:19.740 [2024-11-20 05:26:51.327762] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:20.308 05:26:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:16:20.308 00:16:20.308 real 0m2.996s 00:16:20.308 user 0m3.567s 00:16:20.308 sys 0m0.767s 00:16:20.308 05:26:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:20.308 05:26:51 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.308 ************************************ 00:16:20.308 END TEST raid_function_test_concat 00:16:20.308 ************************************ 00:16:20.308 05:26:51 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:16:20.308 05:26:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:20.308 05:26:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:20.308 05:26:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.308 ************************************ 00:16:20.308 START TEST raid0_resize_test 00:16:20.308 ************************************ 00:16:20.308 05:26:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:16:20.308 05:26:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:16:20.308 05:26:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:16:20.308 05:26:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:16:20.308 05:26:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:16:20.309 05:26:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:16:20.309 05:26:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:16:20.309 05:26:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:16:20.309 05:26:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:16:20.309 05:26:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=59425 00:16:20.309 Process raid pid: 59425 00:16:20.309 05:26:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 59425' 00:16:20.309 05:26:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 59425 00:16:20.309 05:26:51 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@833 -- # '[' -z 59425 ']' 00:16:20.309 05:26:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.309 05:26:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:20.309 05:26:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.309 05:26:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:20.309 05:26:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.309 05:26:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:20.309 [2024-11-20 05:26:52.036326] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:16:20.309 [2024-11-20 05:26:52.036462] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.567 [2024-11-20 05:26:52.193860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.567 [2024-11-20 05:26:52.292737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.825 [2024-11-20 05:26:52.412963] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.825 [2024-11-20 05:26:52.413000] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.392 05:26:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:21.392 05:26:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:16:21.392 05:26:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:16:21.392 05:26:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.392 05:26:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.392 Base_1 00:16:21.392 05:26:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.392 05:26:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:16:21.392 05:26:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.392 05:26:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.392 Base_2 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.392 [2024-11-20 05:26:53.012049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:21.392 [2024-11-20 05:26:53.013622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:21.392 [2024-11-20 05:26:53.013668] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:21.392 [2024-11-20 05:26:53.013678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:21.392 [2024-11-20 05:26:53.013882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:21.392 [2024-11-20 05:26:53.013972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:21.392 [2024-11-20 05:26:53.013979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:16:21.392 [2024-11-20 05:26:53.014081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.392 [2024-11-20 05:26:53.020012] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:21.392 [2024-11-20 05:26:53.020036] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:16:21.392 true 
00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.392 [2024-11-20 05:26:53.032171] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.392 [2024-11-20 05:26:53.064009] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:21.392 [2024-11-20 05:26:53.064028] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:16:21.392 [2024-11-20 05:26:53.064047] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:16:21.392 true 
00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.392 [2024-11-20 05:26:53.076179] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 59425 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # '[' -z 59425 ']' 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 59425 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59425 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:21.392 05:26:53 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:21.392 killing process with pid 59425 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59425' 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 59425 00:16:21.392 [2024-11-20 05:26:53.129493] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.392 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 59425 00:16:21.392 [2024-11-20 05:26:53.129573] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.392 [2024-11-20 05:26:53.129619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.392 [2024-11-20 05:26:53.129627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:16:21.392 [2024-11-20 05:26:53.138905] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.959 05:26:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:16:21.959 00:16:21.959 real 0m1.777s 00:16:21.959 user 0m2.008s 00:16:21.959 sys 0m0.287s 00:16:21.959 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:21.959 05:26:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.959 ************************************ 00:16:21.959 END TEST raid0_resize_test 00:16:21.959 ************************************ 00:16:21.960 05:26:53 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:16:21.960 05:26:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:21.960 05:26:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:21.960 05:26:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.960 
************************************ 00:16:21.960 START TEST raid1_resize_test 00:16:21.960 ************************************ 00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=59470 00:16:21.960 Process raid pid: 59470 00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 59470' 00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 59470 00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 59470 ']' 00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:21.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:21.960 05:26:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.218 05:26:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:22.218 05:26:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.218 [2024-11-20 05:26:53.853788] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:16:22.218 [2024-11-20 05:26:53.853906] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.218 [2024-11-20 05:26:54.014266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.476 [2024-11-20 05:26:54.129190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.476 [2024-11-20 05:26:54.269765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.476 [2024-11-20 05:26:54.269807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.042 Base_1 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:16:23.042 
05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.042 Base_2 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.042 [2024-11-20 05:26:54.724329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:23.042 [2024-11-20 05:26:54.726288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:23.042 [2024-11-20 05:26:54.726348] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:23.042 [2024-11-20 05:26:54.726361] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:23.042 [2024-11-20 05:26:54.726641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:23.042 [2024-11-20 05:26:54.726761] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:23.042 [2024-11-20 05:26:54.726770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:16:23.042 [2024-11-20 05:26:54.726907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:16:23.042 05:26:54 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.042 [2024-11-20 05:26:54.732300] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:23.042 [2024-11-20 05:26:54.732328] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:16:23.042 true 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:16:23.042 [2024-11-20 05:26:54.744499] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:16:23.042 [2024-11-20 05:26:54.772296] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:23.042 [2024-11-20 05:26:54.772318] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:16:23.042 [2024-11-20 05:26:54.772344] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:16:23.042 true 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.042 [2024-11-20 05:26:54.784507] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 59470 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 59470 ']' 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 59470 00:16:23.042 
05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59470 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:23.042 killing process with pid 59470 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59470' 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 59470 00:16:23.042 05:26:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 59470 00:16:23.042 [2024-11-20 05:26:54.829040] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:23.042 [2024-11-20 05:26:54.829131] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.042 [2024-11-20 05:26:54.829611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.042 [2024-11-20 05:26:54.829632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:16:23.042 [2024-11-20 05:26:54.840672] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:23.975 05:26:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:16:23.975 00:16:23.975 real 0m1.796s 00:16:23.975 user 0m1.921s 00:16:23.975 sys 0m0.268s 00:16:23.975 05:26:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:23.975 05:26:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.975 ************************************ 00:16:23.975 END TEST raid1_resize_test 
00:16:23.975 ************************************ 00:16:23.975 05:26:55 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:16:23.975 05:26:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:23.976 05:26:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:16:23.976 05:26:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:23.976 05:26:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:23.976 05:26:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.976 ************************************ 00:16:23.976 START TEST raid_state_function_test 00:16:23.976 ************************************ 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:23.976 05:26:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=59527 00:16:23.976 Process raid pid: 59527 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 59527' 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 59527 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 59527 ']' 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:23.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.976 05:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:23.976 [2024-11-20 05:26:55.693510] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:16:23.976 [2024-11-20 05:26:55.693635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.234 [2024-11-20 05:26:55.856292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.234 [2024-11-20 05:26:55.972832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.491 [2024-11-20 05:26:56.122128] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.491 [2024-11-20 05:26:56.122181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 
-b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.749 [2024-11-20 05:26:56.573689] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:24.749 [2024-11-20 05:26:56.573754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:24.749 [2024-11-20 05:26:56.573764] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.749 [2024-11-20 05:26:56.573775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.749 05:26:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.749 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.007 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.007 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.007 "name": "Existed_Raid", 00:16:25.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.007 "strip_size_kb": 64, 00:16:25.007 "state": "configuring", 00:16:25.007 "raid_level": "raid0", 00:16:25.007 "superblock": false, 00:16:25.007 "num_base_bdevs": 2, 00:16:25.007 "num_base_bdevs_discovered": 0, 00:16:25.007 "num_base_bdevs_operational": 2, 00:16:25.007 "base_bdevs_list": [ 00:16:25.007 { 00:16:25.007 "name": "BaseBdev1", 00:16:25.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.007 "is_configured": false, 00:16:25.007 "data_offset": 0, 00:16:25.007 "data_size": 0 00:16:25.007 }, 00:16:25.007 { 00:16:25.007 "name": "BaseBdev2", 00:16:25.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.007 "is_configured": false, 00:16:25.007 "data_offset": 0, 00:16:25.007 "data_size": 0 00:16:25.007 } 00:16:25.007 ] 00:16:25.007 }' 00:16:25.007 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.007 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:25.266 05:26:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.266 [2024-11-20 05:26:56.873712] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.266 [2024-11-20 05:26:56.873751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.266 [2024-11-20 05:26:56.881702] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:25.266 [2024-11-20 05:26:56.881740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:25.266 [2024-11-20 05:26:56.881749] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.266 [2024-11-20 05:26:56.881761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.266 [2024-11-20 05:26:56.917286] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.266 BaseBdev1 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.266 [ 00:16:25.266 { 00:16:25.266 "name": "BaseBdev1", 00:16:25.266 "aliases": [ 00:16:25.266 "09cfce67-714b-4c23-930e-3de1f3bc726a" 00:16:25.266 ], 00:16:25.266 "product_name": "Malloc disk", 00:16:25.266 "block_size": 512, 00:16:25.266 "num_blocks": 65536, 00:16:25.266 "uuid": 
"09cfce67-714b-4c23-930e-3de1f3bc726a", 00:16:25.266 "assigned_rate_limits": { 00:16:25.266 "rw_ios_per_sec": 0, 00:16:25.266 "rw_mbytes_per_sec": 0, 00:16:25.266 "r_mbytes_per_sec": 0, 00:16:25.266 "w_mbytes_per_sec": 0 00:16:25.266 }, 00:16:25.266 "claimed": true, 00:16:25.266 "claim_type": "exclusive_write", 00:16:25.266 "zoned": false, 00:16:25.266 "supported_io_types": { 00:16:25.266 "read": true, 00:16:25.266 "write": true, 00:16:25.266 "unmap": true, 00:16:25.266 "flush": true, 00:16:25.266 "reset": true, 00:16:25.266 "nvme_admin": false, 00:16:25.266 "nvme_io": false, 00:16:25.266 "nvme_io_md": false, 00:16:25.266 "write_zeroes": true, 00:16:25.266 "zcopy": true, 00:16:25.266 "get_zone_info": false, 00:16:25.266 "zone_management": false, 00:16:25.266 "zone_append": false, 00:16:25.266 "compare": false, 00:16:25.266 "compare_and_write": false, 00:16:25.266 "abort": true, 00:16:25.266 "seek_hole": false, 00:16:25.266 "seek_data": false, 00:16:25.266 "copy": true, 00:16:25.266 "nvme_iov_md": false 00:16:25.266 }, 00:16:25.266 "memory_domains": [ 00:16:25.266 { 00:16:25.266 "dma_device_id": "system", 00:16:25.266 "dma_device_type": 1 00:16:25.266 }, 00:16:25.266 { 00:16:25.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.266 "dma_device_type": 2 00:16:25.266 } 00:16:25.266 ], 00:16:25.266 "driver_specific": {} 00:16:25.266 } 00:16:25.266 ] 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.266 05:26:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.266 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.266 "name": "Existed_Raid", 00:16:25.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.266 "strip_size_kb": 64, 00:16:25.266 "state": "configuring", 00:16:25.266 "raid_level": "raid0", 00:16:25.267 "superblock": false, 00:16:25.267 "num_base_bdevs": 2, 00:16:25.267 "num_base_bdevs_discovered": 1, 00:16:25.267 "num_base_bdevs_operational": 2, 00:16:25.267 "base_bdevs_list": [ 00:16:25.267 { 00:16:25.267 "name": "BaseBdev1", 00:16:25.267 "uuid": "09cfce67-714b-4c23-930e-3de1f3bc726a", 00:16:25.267 "is_configured": true, 00:16:25.267 "data_offset": 0, 
00:16:25.267 "data_size": 65536 00:16:25.267 }, 00:16:25.267 { 00:16:25.267 "name": "BaseBdev2", 00:16:25.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.267 "is_configured": false, 00:16:25.267 "data_offset": 0, 00:16:25.267 "data_size": 0 00:16:25.267 } 00:16:25.267 ] 00:16:25.267 }' 00:16:25.267 05:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.267 05:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.526 [2024-11-20 05:26:57.265428] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.526 [2024-11-20 05:26:57.265487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.526 [2024-11-20 05:26:57.273452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.526 [2024-11-20 05:26:57.275441] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.526 [2024-11-20 05:26:57.275483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.526 "name": "Existed_Raid", 00:16:25.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.526 "strip_size_kb": 64, 00:16:25.526 "state": "configuring", 00:16:25.526 "raid_level": "raid0", 00:16:25.526 "superblock": false, 00:16:25.526 "num_base_bdevs": 2, 00:16:25.526 "num_base_bdevs_discovered": 1, 00:16:25.526 "num_base_bdevs_operational": 2, 00:16:25.526 "base_bdevs_list": [ 00:16:25.526 { 00:16:25.526 "name": "BaseBdev1", 00:16:25.526 "uuid": "09cfce67-714b-4c23-930e-3de1f3bc726a", 00:16:25.526 "is_configured": true, 00:16:25.526 "data_offset": 0, 00:16:25.526 "data_size": 65536 00:16:25.526 }, 00:16:25.526 { 00:16:25.526 "name": "BaseBdev2", 00:16:25.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.526 "is_configured": false, 00:16:25.526 "data_offset": 0, 00:16:25.526 "data_size": 0 00:16:25.526 } 00:16:25.526 ] 00:16:25.526 }' 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.526 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.783 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:25.783 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.784 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.042 [2024-11-20 05:26:57.634129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.042 [2024-11-20 05:26:57.634172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:26.042 [2024-11-20 05:26:57.634179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:26.042 [2024-11-20 05:26:57.634423] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:26.042 BaseBdev2 00:16:26.042 [2024-11-20 05:26:57.634635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:26.042 [2024-11-20 05:26:57.634646] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:26.042 [2024-11-20 05:26:57.634862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.042 05:26:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.042 [ 00:16:26.042 { 00:16:26.042 "name": "BaseBdev2", 00:16:26.042 "aliases": [ 00:16:26.042 "d4ab2c6f-b0ef-48ec-b4aa-8d67eac33071" 00:16:26.042 ], 00:16:26.042 "product_name": "Malloc disk", 00:16:26.042 "block_size": 512, 00:16:26.042 "num_blocks": 65536, 00:16:26.042 "uuid": "d4ab2c6f-b0ef-48ec-b4aa-8d67eac33071", 00:16:26.042 "assigned_rate_limits": { 00:16:26.042 "rw_ios_per_sec": 0, 00:16:26.042 "rw_mbytes_per_sec": 0, 00:16:26.042 "r_mbytes_per_sec": 0, 00:16:26.042 "w_mbytes_per_sec": 0 00:16:26.042 }, 00:16:26.042 "claimed": true, 00:16:26.042 "claim_type": "exclusive_write", 00:16:26.042 "zoned": false, 00:16:26.042 "supported_io_types": { 00:16:26.042 "read": true, 00:16:26.042 "write": true, 00:16:26.042 "unmap": true, 00:16:26.042 "flush": true, 00:16:26.042 "reset": true, 00:16:26.042 "nvme_admin": false, 00:16:26.042 "nvme_io": false, 00:16:26.042 "nvme_io_md": false, 00:16:26.042 "write_zeroes": true, 00:16:26.042 "zcopy": true, 00:16:26.042 "get_zone_info": false, 00:16:26.042 "zone_management": false, 00:16:26.042 "zone_append": false, 00:16:26.042 "compare": false, 00:16:26.042 "compare_and_write": false, 00:16:26.042 "abort": true, 00:16:26.042 "seek_hole": false, 00:16:26.042 "seek_data": false, 00:16:26.042 "copy": true, 00:16:26.042 "nvme_iov_md": false 00:16:26.042 }, 00:16:26.042 "memory_domains": [ 00:16:26.042 { 00:16:26.042 "dma_device_id": "system", 00:16:26.042 "dma_device_type": 1 00:16:26.042 }, 00:16:26.042 { 00:16:26.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.042 "dma_device_type": 2 00:16:26.042 } 00:16:26.042 ], 00:16:26.042 "driver_specific": {} 00:16:26.042 } 00:16:26.042 ] 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:26.042 05:26:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.042 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.043 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.043 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.043 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:26.043 "name": "Existed_Raid", 00:16:26.043 "uuid": "d1db792d-730b-4039-a681-9d2afb39856b", 00:16:26.043 "strip_size_kb": 64, 00:16:26.043 "state": "online", 00:16:26.043 "raid_level": "raid0", 00:16:26.043 "superblock": false, 00:16:26.043 "num_base_bdevs": 2, 00:16:26.043 "num_base_bdevs_discovered": 2, 00:16:26.043 "num_base_bdevs_operational": 2, 00:16:26.043 "base_bdevs_list": [ 00:16:26.043 { 00:16:26.043 "name": "BaseBdev1", 00:16:26.043 "uuid": "09cfce67-714b-4c23-930e-3de1f3bc726a", 00:16:26.043 "is_configured": true, 00:16:26.043 "data_offset": 0, 00:16:26.043 "data_size": 65536 00:16:26.043 }, 00:16:26.043 { 00:16:26.043 "name": "BaseBdev2", 00:16:26.043 "uuid": "d4ab2c6f-b0ef-48ec-b4aa-8d67eac33071", 00:16:26.043 "is_configured": true, 00:16:26.043 "data_offset": 0, 00:16:26.043 "data_size": 65536 00:16:26.043 } 00:16:26.043 ] 00:16:26.043 }' 00:16:26.043 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.043 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.302 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:26.302 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:26.302 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:26.302 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:26.302 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:26.302 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:26.302 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:26.302 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:26.302 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.302 05:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:26.302 [2024-11-20 05:26:57.982516] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.302 05:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:26.302 "name": "Existed_Raid", 00:16:26.302 "aliases": [ 00:16:26.302 "d1db792d-730b-4039-a681-9d2afb39856b" 00:16:26.302 ], 00:16:26.302 "product_name": "Raid Volume", 00:16:26.302 "block_size": 512, 00:16:26.302 "num_blocks": 131072, 00:16:26.302 "uuid": "d1db792d-730b-4039-a681-9d2afb39856b", 00:16:26.302 "assigned_rate_limits": { 00:16:26.302 "rw_ios_per_sec": 0, 00:16:26.302 "rw_mbytes_per_sec": 0, 00:16:26.302 "r_mbytes_per_sec": 0, 00:16:26.302 "w_mbytes_per_sec": 0 00:16:26.302 }, 00:16:26.302 "claimed": false, 00:16:26.302 "zoned": false, 00:16:26.302 "supported_io_types": { 00:16:26.302 "read": true, 00:16:26.302 "write": true, 00:16:26.302 "unmap": true, 00:16:26.302 "flush": true, 00:16:26.302 "reset": true, 00:16:26.302 "nvme_admin": false, 00:16:26.302 "nvme_io": false, 00:16:26.302 "nvme_io_md": false, 00:16:26.302 "write_zeroes": true, 00:16:26.302 "zcopy": false, 00:16:26.302 "get_zone_info": false, 00:16:26.302 "zone_management": false, 00:16:26.302 "zone_append": false, 00:16:26.302 "compare": false, 00:16:26.302 "compare_and_write": false, 00:16:26.302 "abort": false, 00:16:26.302 "seek_hole": false, 00:16:26.302 "seek_data": false, 00:16:26.302 "copy": false, 00:16:26.302 "nvme_iov_md": false 00:16:26.302 }, 00:16:26.302 "memory_domains": [ 00:16:26.302 { 00:16:26.302 "dma_device_id": "system", 00:16:26.302 "dma_device_type": 1 00:16:26.302 }, 00:16:26.302 { 00:16:26.302 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:26.302 "dma_device_type": 2 00:16:26.302 }, 00:16:26.302 { 00:16:26.302 "dma_device_id": "system", 00:16:26.302 "dma_device_type": 1 00:16:26.302 }, 00:16:26.302 { 00:16:26.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.302 "dma_device_type": 2 00:16:26.302 } 00:16:26.302 ], 00:16:26.302 "driver_specific": { 00:16:26.302 "raid": { 00:16:26.302 "uuid": "d1db792d-730b-4039-a681-9d2afb39856b", 00:16:26.302 "strip_size_kb": 64, 00:16:26.302 "state": "online", 00:16:26.302 "raid_level": "raid0", 00:16:26.302 "superblock": false, 00:16:26.302 "num_base_bdevs": 2, 00:16:26.302 "num_base_bdevs_discovered": 2, 00:16:26.302 "num_base_bdevs_operational": 2, 00:16:26.302 "base_bdevs_list": [ 00:16:26.302 { 00:16:26.302 "name": "BaseBdev1", 00:16:26.302 "uuid": "09cfce67-714b-4c23-930e-3de1f3bc726a", 00:16:26.302 "is_configured": true, 00:16:26.302 "data_offset": 0, 00:16:26.302 "data_size": 65536 00:16:26.302 }, 00:16:26.302 { 00:16:26.302 "name": "BaseBdev2", 00:16:26.302 "uuid": "d4ab2c6f-b0ef-48ec-b4aa-8d67eac33071", 00:16:26.302 "is_configured": true, 00:16:26.302 "data_offset": 0, 00:16:26.302 "data_size": 65536 00:16:26.302 } 00:16:26.302 ] 00:16:26.302 } 00:16:26.302 } 00:16:26.302 }' 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:26.302 BaseBdev2' 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.302 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:26.561 [2024-11-20 05:26:58.146314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.561 [2024-11-20 05:26:58.146347] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.561 [2024-11-20 05:26:58.146406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.561 05:26:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.561 "name": "Existed_Raid", 00:16:26.561 "uuid": "d1db792d-730b-4039-a681-9d2afb39856b", 00:16:26.561 "strip_size_kb": 64, 00:16:26.561 "state": "offline", 00:16:26.561 "raid_level": "raid0", 00:16:26.561 "superblock": false, 00:16:26.561 "num_base_bdevs": 2, 00:16:26.561 "num_base_bdevs_discovered": 1, 00:16:26.561 "num_base_bdevs_operational": 1, 00:16:26.561 "base_bdevs_list": [ 00:16:26.561 { 00:16:26.561 "name": null, 00:16:26.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.561 "is_configured": false, 00:16:26.561 "data_offset": 0, 00:16:26.561 "data_size": 65536 00:16:26.561 }, 00:16:26.561 { 00:16:26.561 "name": "BaseBdev2", 00:16:26.561 "uuid": "d4ab2c6f-b0ef-48ec-b4aa-8d67eac33071", 00:16:26.561 "is_configured": true, 00:16:26.561 "data_offset": 0, 00:16:26.561 "data_size": 65536 00:16:26.561 } 00:16:26.561 ] 00:16:26.561 }' 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.561 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.820 [2024-11-20 05:26:58.531470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:26.820 [2024-11-20 05:26:58.531630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.820 05:26:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 59527 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 59527 ']' 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 59527 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:26.820 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59527 00:16:27.078 killing process with pid 59527 00:16:27.078 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:27.078 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:27.078 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59527' 00:16:27.078 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 59527 00:16:27.078 [2024-11-20 05:26:58.652832] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:16:27.078 05:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 59527 00:16:27.078 [2024-11-20 05:26:58.661672] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:27.643 05:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:27.643 00:16:27.643 real 0m3.649s 00:16:27.643 user 0m5.282s 00:16:27.643 sys 0m0.629s 00:16:27.643 05:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:27.643 05:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.643 ************************************ 00:16:27.643 END TEST raid_state_function_test 00:16:27.643 ************************************ 00:16:27.643 05:26:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:16:27.643 05:26:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:27.643 05:26:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:27.643 05:26:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:27.643 ************************************ 00:16:27.643 START TEST raid_state_function_test_sb 00:16:27.643 ************************************ 00:16:27.643 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:16:27.643 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:16:27.643 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:27.643 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:27.643 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:27.643 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:16:27.643 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:27.643 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:27.643 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:27.643 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:27.643 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:27.643 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:16:27.644 Process raid pid: 59769 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:27.644 05:26:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=59769 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 59769' 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 59769 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 59769 ']' 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:27.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:27.644 05:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.644 [2024-11-20 05:26:59.387984] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:16:27.644 [2024-11-20 05:26:59.388104] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.901 [2024-11-20 05:26:59.550941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.901 [2024-11-20 05:26:59.671126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.159 [2024-11-20 05:26:59.820042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.159 [2024-11-20 05:26:59.820084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.417 [2024-11-20 05:27:00.208109] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.417 [2024-11-20 05:27:00.208163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.417 [2024-11-20 05:27:00.208174] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.417 [2024-11-20 05:27:00.208184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.417 
05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.417 "name": "Existed_Raid", 00:16:28.417 "uuid": "c00bec42-ecba-49da-b0fd-d08169e7340f", 00:16:28.417 "strip_size_kb": 
64, 00:16:28.417 "state": "configuring", 00:16:28.417 "raid_level": "raid0", 00:16:28.417 "superblock": true, 00:16:28.417 "num_base_bdevs": 2, 00:16:28.417 "num_base_bdevs_discovered": 0, 00:16:28.417 "num_base_bdevs_operational": 2, 00:16:28.417 "base_bdevs_list": [ 00:16:28.417 { 00:16:28.417 "name": "BaseBdev1", 00:16:28.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.417 "is_configured": false, 00:16:28.417 "data_offset": 0, 00:16:28.417 "data_size": 0 00:16:28.417 }, 00:16:28.417 { 00:16:28.417 "name": "BaseBdev2", 00:16:28.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.417 "is_configured": false, 00:16:28.417 "data_offset": 0, 00:16:28.417 "data_size": 0 00:16:28.417 } 00:16:28.417 ] 00:16:28.417 }' 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.417 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.675 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:28.675 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.675 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.934 [2024-11-20 05:27:00.512133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:28.934 [2024-11-20 05:27:00.512174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:28.934 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.934 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:28.934 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.934 05:27:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.934 [2024-11-20 05:27:00.520126] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.934 [2024-11-20 05:27:00.520165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.934 [2024-11-20 05:27:00.520174] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.934 [2024-11-20 05:27:00.520186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.934 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.935 [2024-11-20 05:27:00.554951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:28.935 BaseBdev1 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.935 [ 00:16:28.935 { 00:16:28.935 "name": "BaseBdev1", 00:16:28.935 "aliases": [ 00:16:28.935 "02561f9f-cc21-41c0-92dc-8014a9b6cb1e" 00:16:28.935 ], 00:16:28.935 "product_name": "Malloc disk", 00:16:28.935 "block_size": 512, 00:16:28.935 "num_blocks": 65536, 00:16:28.935 "uuid": "02561f9f-cc21-41c0-92dc-8014a9b6cb1e", 00:16:28.935 "assigned_rate_limits": { 00:16:28.935 "rw_ios_per_sec": 0, 00:16:28.935 "rw_mbytes_per_sec": 0, 00:16:28.935 "r_mbytes_per_sec": 0, 00:16:28.935 "w_mbytes_per_sec": 0 00:16:28.935 }, 00:16:28.935 "claimed": true, 00:16:28.935 "claim_type": "exclusive_write", 00:16:28.935 "zoned": false, 00:16:28.935 "supported_io_types": { 00:16:28.935 "read": true, 00:16:28.935 "write": true, 00:16:28.935 "unmap": true, 00:16:28.935 "flush": true, 00:16:28.935 "reset": true, 00:16:28.935 "nvme_admin": false, 00:16:28.935 "nvme_io": false, 00:16:28.935 "nvme_io_md": false, 00:16:28.935 "write_zeroes": true, 00:16:28.935 "zcopy": true, 00:16:28.935 "get_zone_info": false, 00:16:28.935 "zone_management": false, 00:16:28.935 "zone_append": false, 00:16:28.935 "compare": false, 00:16:28.935 "compare_and_write": false, 00:16:28.935 
"abort": true, 00:16:28.935 "seek_hole": false, 00:16:28.935 "seek_data": false, 00:16:28.935 "copy": true, 00:16:28.935 "nvme_iov_md": false 00:16:28.935 }, 00:16:28.935 "memory_domains": [ 00:16:28.935 { 00:16:28.935 "dma_device_id": "system", 00:16:28.935 "dma_device_type": 1 00:16:28.935 }, 00:16:28.935 { 00:16:28.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.935 "dma_device_type": 2 00:16:28.935 } 00:16:28.935 ], 00:16:28.935 "driver_specific": {} 00:16:28.935 } 00:16:28.935 ] 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.935 "name": "Existed_Raid", 00:16:28.935 "uuid": "ba4e84d2-f882-47fc-8c7e-0e7898dc8967", 00:16:28.935 "strip_size_kb": 64, 00:16:28.935 "state": "configuring", 00:16:28.935 "raid_level": "raid0", 00:16:28.935 "superblock": true, 00:16:28.935 "num_base_bdevs": 2, 00:16:28.935 "num_base_bdevs_discovered": 1, 00:16:28.935 "num_base_bdevs_operational": 2, 00:16:28.935 "base_bdevs_list": [ 00:16:28.935 { 00:16:28.935 "name": "BaseBdev1", 00:16:28.935 "uuid": "02561f9f-cc21-41c0-92dc-8014a9b6cb1e", 00:16:28.935 "is_configured": true, 00:16:28.935 "data_offset": 2048, 00:16:28.935 "data_size": 63488 00:16:28.935 }, 00:16:28.935 { 00:16:28.935 "name": "BaseBdev2", 00:16:28.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.935 "is_configured": false, 00:16:28.935 "data_offset": 0, 00:16:28.935 "data_size": 0 00:16:28.935 } 00:16:28.935 ] 00:16:28.935 }' 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.935 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.193 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:29.193 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.193 05:27:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.193 [2024-11-20 05:27:00.911088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:29.194 [2024-11-20 05:27:00.911258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.194 [2024-11-20 05:27:00.919136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.194 [2024-11-20 05:27:00.921150] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.194 [2024-11-20 05:27:00.921195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.194 "name": "Existed_Raid", 00:16:29.194 "uuid": "908799a9-c133-45c1-9c29-50c0dd024d4c", 00:16:29.194 "strip_size_kb": 64, 00:16:29.194 "state": "configuring", 00:16:29.194 "raid_level": "raid0", 00:16:29.194 "superblock": true, 00:16:29.194 "num_base_bdevs": 2, 00:16:29.194 "num_base_bdevs_discovered": 1, 00:16:29.194 "num_base_bdevs_operational": 2, 00:16:29.194 "base_bdevs_list": [ 00:16:29.194 { 00:16:29.194 "name": "BaseBdev1", 00:16:29.194 "uuid": "02561f9f-cc21-41c0-92dc-8014a9b6cb1e", 00:16:29.194 "is_configured": true, 00:16:29.194 "data_offset": 2048, 
00:16:29.194 "data_size": 63488 00:16:29.194 }, 00:16:29.194 { 00:16:29.194 "name": "BaseBdev2", 00:16:29.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.194 "is_configured": false, 00:16:29.194 "data_offset": 0, 00:16:29.194 "data_size": 0 00:16:29.194 } 00:16:29.194 ] 00:16:29.194 }' 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.194 05:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.452 [2024-11-20 05:27:01.259983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.452 [2024-11-20 05:27:01.260227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:29.452 [2024-11-20 05:27:01.260241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:29.452 BaseBdev2 00:16:29.452 [2024-11-20 05:27:01.260550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:29.452 [2024-11-20 05:27:01.260695] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:29.452 [2024-11-20 05:27:01.260706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:29.452 [2024-11-20 05:27:01.260838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.452 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.452 [ 00:16:29.452 { 00:16:29.452 "name": "BaseBdev2", 00:16:29.452 "aliases": [ 00:16:29.453 "3af6777b-cbfb-4bce-a7dd-3a315468797a" 00:16:29.453 ], 00:16:29.453 "product_name": "Malloc disk", 00:16:29.453 "block_size": 512, 00:16:29.453 "num_blocks": 65536, 00:16:29.453 "uuid": "3af6777b-cbfb-4bce-a7dd-3a315468797a", 00:16:29.453 "assigned_rate_limits": { 00:16:29.453 "rw_ios_per_sec": 0, 00:16:29.453 "rw_mbytes_per_sec": 0, 00:16:29.453 "r_mbytes_per_sec": 0, 00:16:29.453 "w_mbytes_per_sec": 0 00:16:29.453 }, 00:16:29.453 "claimed": true, 00:16:29.453 "claim_type": 
"exclusive_write", 00:16:29.453 "zoned": false, 00:16:29.453 "supported_io_types": { 00:16:29.453 "read": true, 00:16:29.453 "write": true, 00:16:29.453 "unmap": true, 00:16:29.453 "flush": true, 00:16:29.453 "reset": true, 00:16:29.453 "nvme_admin": false, 00:16:29.453 "nvme_io": false, 00:16:29.453 "nvme_io_md": false, 00:16:29.453 "write_zeroes": true, 00:16:29.453 "zcopy": true, 00:16:29.453 "get_zone_info": false, 00:16:29.453 "zone_management": false, 00:16:29.453 "zone_append": false, 00:16:29.453 "compare": false, 00:16:29.453 "compare_and_write": false, 00:16:29.453 "abort": true, 00:16:29.711 "seek_hole": false, 00:16:29.711 "seek_data": false, 00:16:29.711 "copy": true, 00:16:29.711 "nvme_iov_md": false 00:16:29.711 }, 00:16:29.711 "memory_domains": [ 00:16:29.711 { 00:16:29.711 "dma_device_id": "system", 00:16:29.711 "dma_device_type": 1 00:16:29.711 }, 00:16:29.711 { 00:16:29.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.711 "dma_device_type": 2 00:16:29.711 } 00:16:29.711 ], 00:16:29.711 "driver_specific": {} 00:16:29.711 } 00:16:29.711 ] 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.711 "name": "Existed_Raid", 00:16:29.711 "uuid": "908799a9-c133-45c1-9c29-50c0dd024d4c", 00:16:29.711 "strip_size_kb": 64, 00:16:29.711 "state": "online", 00:16:29.711 "raid_level": "raid0", 00:16:29.711 "superblock": true, 00:16:29.711 "num_base_bdevs": 2, 00:16:29.711 "num_base_bdevs_discovered": 2, 00:16:29.711 "num_base_bdevs_operational": 2, 00:16:29.711 "base_bdevs_list": [ 00:16:29.711 { 00:16:29.711 "name": "BaseBdev1", 00:16:29.711 "uuid": "02561f9f-cc21-41c0-92dc-8014a9b6cb1e", 00:16:29.711 "is_configured": true, 00:16:29.711 "data_offset": 2048, 00:16:29.711 "data_size": 63488 
00:16:29.711 }, 00:16:29.711 { 00:16:29.711 "name": "BaseBdev2", 00:16:29.711 "uuid": "3af6777b-cbfb-4bce-a7dd-3a315468797a", 00:16:29.711 "is_configured": true, 00:16:29.711 "data_offset": 2048, 00:16:29.711 "data_size": 63488 00:16:29.711 } 00:16:29.711 ] 00:16:29.711 }' 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.711 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.969 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:29.969 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:29.969 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:29.969 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:29.969 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:29.969 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:29.969 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:29.969 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:29.969 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.969 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.969 [2024-11-20 05:27:01.648453] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.969 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.969 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:29.969 "name": 
"Existed_Raid", 00:16:29.969 "aliases": [ 00:16:29.969 "908799a9-c133-45c1-9c29-50c0dd024d4c" 00:16:29.969 ], 00:16:29.969 "product_name": "Raid Volume", 00:16:29.969 "block_size": 512, 00:16:29.969 "num_blocks": 126976, 00:16:29.969 "uuid": "908799a9-c133-45c1-9c29-50c0dd024d4c", 00:16:29.969 "assigned_rate_limits": { 00:16:29.969 "rw_ios_per_sec": 0, 00:16:29.969 "rw_mbytes_per_sec": 0, 00:16:29.969 "r_mbytes_per_sec": 0, 00:16:29.969 "w_mbytes_per_sec": 0 00:16:29.969 }, 00:16:29.969 "claimed": false, 00:16:29.969 "zoned": false, 00:16:29.969 "supported_io_types": { 00:16:29.969 "read": true, 00:16:29.969 "write": true, 00:16:29.969 "unmap": true, 00:16:29.969 "flush": true, 00:16:29.969 "reset": true, 00:16:29.969 "nvme_admin": false, 00:16:29.969 "nvme_io": false, 00:16:29.969 "nvme_io_md": false, 00:16:29.969 "write_zeroes": true, 00:16:29.969 "zcopy": false, 00:16:29.969 "get_zone_info": false, 00:16:29.969 "zone_management": false, 00:16:29.969 "zone_append": false, 00:16:29.969 "compare": false, 00:16:29.970 "compare_and_write": false, 00:16:29.970 "abort": false, 00:16:29.970 "seek_hole": false, 00:16:29.970 "seek_data": false, 00:16:29.970 "copy": false, 00:16:29.970 "nvme_iov_md": false 00:16:29.970 }, 00:16:29.970 "memory_domains": [ 00:16:29.970 { 00:16:29.970 "dma_device_id": "system", 00:16:29.970 "dma_device_type": 1 00:16:29.970 }, 00:16:29.970 { 00:16:29.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.970 "dma_device_type": 2 00:16:29.970 }, 00:16:29.970 { 00:16:29.970 "dma_device_id": "system", 00:16:29.970 "dma_device_type": 1 00:16:29.970 }, 00:16:29.970 { 00:16:29.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.970 "dma_device_type": 2 00:16:29.970 } 00:16:29.970 ], 00:16:29.970 "driver_specific": { 00:16:29.970 "raid": { 00:16:29.970 "uuid": "908799a9-c133-45c1-9c29-50c0dd024d4c", 00:16:29.970 "strip_size_kb": 64, 00:16:29.970 "state": "online", 00:16:29.970 "raid_level": "raid0", 00:16:29.970 "superblock": true, 00:16:29.970 
"num_base_bdevs": 2, 00:16:29.970 "num_base_bdevs_discovered": 2, 00:16:29.970 "num_base_bdevs_operational": 2, 00:16:29.970 "base_bdevs_list": [ 00:16:29.970 { 00:16:29.970 "name": "BaseBdev1", 00:16:29.970 "uuid": "02561f9f-cc21-41c0-92dc-8014a9b6cb1e", 00:16:29.970 "is_configured": true, 00:16:29.970 "data_offset": 2048, 00:16:29.970 "data_size": 63488 00:16:29.970 }, 00:16:29.970 { 00:16:29.970 "name": "BaseBdev2", 00:16:29.970 "uuid": "3af6777b-cbfb-4bce-a7dd-3a315468797a", 00:16:29.970 "is_configured": true, 00:16:29.970 "data_offset": 2048, 00:16:29.970 "data_size": 63488 00:16:29.970 } 00:16:29.970 ] 00:16:29.970 } 00:16:29.970 } 00:16:29.970 }' 00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:29.970 BaseBdev2' 00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.970 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.227 [2024-11-20 05:27:01.828190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:30.227 [2024-11-20 05:27:01.828335] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.227 [2024-11-20 05:27:01.828415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.227 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.227 05:27:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.228 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.228 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.228 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.228 "name": "Existed_Raid", 00:16:30.228 "uuid": "908799a9-c133-45c1-9c29-50c0dd024d4c", 00:16:30.228 "strip_size_kb": 64, 00:16:30.228 "state": "offline", 00:16:30.228 "raid_level": "raid0", 00:16:30.228 "superblock": true, 00:16:30.228 "num_base_bdevs": 2, 00:16:30.228 "num_base_bdevs_discovered": 1, 00:16:30.228 "num_base_bdevs_operational": 1, 00:16:30.228 "base_bdevs_list": [ 00:16:30.228 { 00:16:30.228 "name": null, 00:16:30.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.228 "is_configured": false, 00:16:30.228 "data_offset": 0, 00:16:30.228 "data_size": 63488 00:16:30.228 }, 00:16:30.228 { 00:16:30.228 "name": "BaseBdev2", 00:16:30.228 "uuid": "3af6777b-cbfb-4bce-a7dd-3a315468797a", 00:16:30.228 "is_configured": true, 00:16:30.228 "data_offset": 2048, 00:16:30.228 "data_size": 63488 00:16:30.228 } 00:16:30.228 ] 00:16:30.228 }' 00:16:30.228 05:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.228 05:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.485 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:30.485 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:30.485 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.485 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.485 05:27:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.485 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:30.485 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.485 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:30.486 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:30.486 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:30.486 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.486 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.486 [2024-11-20 05:27:02.269588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:30.486 [2024-11-20 05:27:02.269658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 59769 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 59769 ']' 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 59769 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59769 00:16:30.744 killing process with pid 59769 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59769' 00:16:30.744 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 59769 00:16:30.745 [2024-11-20 05:27:02.387385] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:30.745 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 59769 00:16:30.745 [2024-11-20 05:27:02.396233] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:31.312 05:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:16:31.312 00:16:31.312 real 0m3.662s 00:16:31.312 user 0m5.355s 00:16:31.312 sys 0m0.574s 00:16:31.312 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:31.312 05:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.312 ************************************ 00:16:31.312 END TEST raid_state_function_test_sb 00:16:31.312 ************************************ 00:16:31.312 05:27:03 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:16:31.312 05:27:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:31.312 05:27:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:31.312 05:27:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:31.312 ************************************ 00:16:31.312 START TEST raid_superblock_test 00:16:31.312 ************************************ 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60007 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60007 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60007 ']' 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.312 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:31.312 [2024-11-20 05:27:03.090331] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:16:31.312 [2024-11-20 05:27:03.090469] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60007 ] 00:16:31.570 [2024-11-20 05:27:03.247086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.570 [2024-11-20 05:27:03.351040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.828 [2024-11-20 05:27:03.474170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.828 [2024-11-20 05:27:03.474237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:32.086 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:32.086 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:16:32.086 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:32.344 
05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.344 malloc1 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.344 [2024-11-20 05:27:03.954427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:32.344 [2024-11-20 05:27:03.954500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.344 [2024-11-20 05:27:03.954520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:32.344 [2024-11-20 05:27:03.954529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.344 [2024-11-20 05:27:03.956544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.344 [2024-11-20 05:27:03.956581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:32.344 pt1 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.344 malloc2 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.344 [2024-11-20 05:27:03.992421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:32.344 [2024-11-20 05:27:03.992481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.344 [2024-11-20 
05:27:03.992503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:32.344 [2024-11-20 05:27:03.992512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.344 [2024-11-20 05:27:03.994433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.344 [2024-11-20 05:27:03.994464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:32.344 pt2 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.344 05:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.344 [2024-11-20 05:27:04.000475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:32.344 [2024-11-20 05:27:04.002126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:32.344 [2024-11-20 05:27:04.002265] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:32.344 [2024-11-20 05:27:04.002280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:32.344 [2024-11-20 05:27:04.002518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:32.344 [2024-11-20 05:27:04.002645] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:32.344 [2024-11-20 05:27:04.002660] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev 
is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:32.344 [2024-11-20 05:27:04.002790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.344 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.344 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.345 "name": "raid_bdev1", 00:16:32.345 "uuid": "890a5ff4-0f1a-485a-8aee-44e29205b80f", 00:16:32.345 "strip_size_kb": 64, 00:16:32.345 "state": "online", 00:16:32.345 "raid_level": "raid0", 00:16:32.345 "superblock": true, 00:16:32.345 "num_base_bdevs": 2, 00:16:32.345 "num_base_bdevs_discovered": 2, 00:16:32.345 "num_base_bdevs_operational": 2, 00:16:32.345 "base_bdevs_list": [ 00:16:32.345 { 00:16:32.345 "name": "pt1", 00:16:32.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:32.345 "is_configured": true, 00:16:32.345 "data_offset": 2048, 00:16:32.345 "data_size": 63488 00:16:32.345 }, 00:16:32.345 { 00:16:32.345 "name": "pt2", 00:16:32.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.345 "is_configured": true, 00:16:32.345 "data_offset": 2048, 00:16:32.345 "data_size": 63488 00:16:32.345 } 00:16:32.345 ] 00:16:32.345 }' 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.345 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.603 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:32.603 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:32.603 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:32.603 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:32.603 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:32.603 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:32.603 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:32.603 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:32.603 05:27:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.603 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.603 [2024-11-20 05:27:04.312777] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.603 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.603 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:32.603 "name": "raid_bdev1", 00:16:32.603 "aliases": [ 00:16:32.603 "890a5ff4-0f1a-485a-8aee-44e29205b80f" 00:16:32.603 ], 00:16:32.603 "product_name": "Raid Volume", 00:16:32.603 "block_size": 512, 00:16:32.603 "num_blocks": 126976, 00:16:32.603 "uuid": "890a5ff4-0f1a-485a-8aee-44e29205b80f", 00:16:32.603 "assigned_rate_limits": { 00:16:32.603 "rw_ios_per_sec": 0, 00:16:32.603 "rw_mbytes_per_sec": 0, 00:16:32.603 "r_mbytes_per_sec": 0, 00:16:32.603 "w_mbytes_per_sec": 0 00:16:32.603 }, 00:16:32.603 "claimed": false, 00:16:32.603 "zoned": false, 00:16:32.603 "supported_io_types": { 00:16:32.603 "read": true, 00:16:32.603 "write": true, 00:16:32.603 "unmap": true, 00:16:32.603 "flush": true, 00:16:32.603 "reset": true, 00:16:32.603 "nvme_admin": false, 00:16:32.603 "nvme_io": false, 00:16:32.603 "nvme_io_md": false, 00:16:32.603 "write_zeroes": true, 00:16:32.603 "zcopy": false, 00:16:32.603 "get_zone_info": false, 00:16:32.603 "zone_management": false, 00:16:32.603 "zone_append": false, 00:16:32.603 "compare": false, 00:16:32.603 "compare_and_write": false, 00:16:32.603 "abort": false, 00:16:32.603 "seek_hole": false, 00:16:32.603 "seek_data": false, 00:16:32.603 "copy": false, 00:16:32.603 "nvme_iov_md": false 00:16:32.603 }, 00:16:32.603 "memory_domains": [ 00:16:32.603 { 00:16:32.603 "dma_device_id": "system", 00:16:32.603 "dma_device_type": 1 00:16:32.603 }, 00:16:32.603 { 00:16:32.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.603 "dma_device_type": 
2 00:16:32.603 }, 00:16:32.603 { 00:16:32.603 "dma_device_id": "system", 00:16:32.603 "dma_device_type": 1 00:16:32.603 }, 00:16:32.603 { 00:16:32.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.603 "dma_device_type": 2 00:16:32.603 } 00:16:32.603 ], 00:16:32.603 "driver_specific": { 00:16:32.603 "raid": { 00:16:32.603 "uuid": "890a5ff4-0f1a-485a-8aee-44e29205b80f", 00:16:32.603 "strip_size_kb": 64, 00:16:32.603 "state": "online", 00:16:32.603 "raid_level": "raid0", 00:16:32.603 "superblock": true, 00:16:32.603 "num_base_bdevs": 2, 00:16:32.603 "num_base_bdevs_discovered": 2, 00:16:32.603 "num_base_bdevs_operational": 2, 00:16:32.603 "base_bdevs_list": [ 00:16:32.603 { 00:16:32.603 "name": "pt1", 00:16:32.603 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:32.603 "is_configured": true, 00:16:32.603 "data_offset": 2048, 00:16:32.603 "data_size": 63488 00:16:32.603 }, 00:16:32.603 { 00:16:32.603 "name": "pt2", 00:16:32.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.604 "is_configured": true, 00:16:32.604 "data_offset": 2048, 00:16:32.604 "data_size": 63488 00:16:32.604 } 00:16:32.604 ] 00:16:32.604 } 00:16:32.604 } 00:16:32.604 }' 00:16:32.604 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.604 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:32.604 pt2' 00:16:32.604 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.604 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:32.604 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.604 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:32.604 05:27:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.604 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.604 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.604 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.941 
[2024-11-20 05:27:04.488806] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=890a5ff4-0f1a-485a-8aee-44e29205b80f 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 890a5ff4-0f1a-485a-8aee-44e29205b80f ']' 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:32.941 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.942 [2024-11-20 05:27:04.512519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.942 [2024-11-20 05:27:04.512550] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.942 [2024-11-20 05:27:04.512633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.942 [2024-11-20 05:27:04.512681] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.942 [2024-11-20 05:27:04.512697] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.942 [2024-11-20 05:27:04.604576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:32.942 [2024-11-20 05:27:04.606294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:32.942 [2024-11-20 05:27:04.606375] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:32.942 [2024-11-20 05:27:04.606423] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:32.942 [2024-11-20 05:27:04.606435] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.942 [2024-11-20 05:27:04.606447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:32.942 request: 00:16:32.942 { 00:16:32.942 "name": "raid_bdev1", 00:16:32.942 "raid_level": "raid0", 00:16:32.942 "base_bdevs": [ 00:16:32.942 "malloc1", 00:16:32.942 "malloc2" 00:16:32.942 ], 00:16:32.942 "strip_size_kb": 64, 00:16:32.942 "superblock": false, 00:16:32.942 "method": "bdev_raid_create", 00:16:32.942 "req_id": 1 00:16:32.942 } 00:16:32.942 Got JSON-RPC error response 00:16:32.942 response: 00:16:32.942 { 00:16:32.942 "code": -17, 00:16:32.942 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:32.942 } 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 
00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.942 [2024-11-20 05:27:04.644566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:32.942 [2024-11-20 05:27:04.644630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.942 [2024-11-20 05:27:04.644650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:32.942 [2024-11-20 05:27:04.644659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.942 [2024-11-20 05:27:04.646655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.942 [2024-11-20 05:27:04.646690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:32.942 [2024-11-20 05:27:04.646772] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:32.942 [2024-11-20 05:27:04.646825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:32.942 pt1 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.942 "name": "raid_bdev1", 00:16:32.942 "uuid": "890a5ff4-0f1a-485a-8aee-44e29205b80f", 00:16:32.942 "strip_size_kb": 64, 00:16:32.942 "state": "configuring", 00:16:32.942 "raid_level": "raid0", 00:16:32.942 "superblock": true, 00:16:32.942 "num_base_bdevs": 2, 00:16:32.942 "num_base_bdevs_discovered": 1, 00:16:32.942 "num_base_bdevs_operational": 2, 00:16:32.942 "base_bdevs_list": [ 00:16:32.942 { 00:16:32.942 "name": "pt1", 00:16:32.942 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:32.942 "is_configured": true, 00:16:32.942 "data_offset": 2048, 00:16:32.942 "data_size": 63488 00:16:32.942 }, 00:16:32.942 { 00:16:32.942 "name": null, 00:16:32.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.942 "is_configured": false, 
00:16:32.942 "data_offset": 2048, 00:16:32.942 "data_size": 63488 00:16:32.942 } 00:16:32.942 ] 00:16:32.942 }' 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.942 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.203 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:33.203 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:33.203 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:33.203 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:33.203 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.203 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.203 [2024-11-20 05:27:04.976638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:33.203 [2024-11-20 05:27:04.976714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.203 [2024-11-20 05:27:04.976732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:33.203 [2024-11-20 05:27:04.976741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.203 [2024-11-20 05:27:04.977154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.203 [2024-11-20 05:27:04.977174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:33.203 [2024-11-20 05:27:04.977244] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:33.203 [2024-11-20 05:27:04.977265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.203 [2024-11-20 05:27:04.977360] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:33.203 [2024-11-20 05:27:04.977386] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:33.203 [2024-11-20 05:27:04.977591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:33.203 [2024-11-20 05:27:04.977708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:33.203 [2024-11-20 05:27:04.977720] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:33.203 [2024-11-20 05:27:04.977830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.203 pt2 00:16:33.203 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.203 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:33.203 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:33.204 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:33.204 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.204 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.204 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:33.204 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.204 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:33.204 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.204 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.204 05:27:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.204 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.204 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.204 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.204 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.204 05:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.204 05:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.204 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.204 "name": "raid_bdev1", 00:16:33.204 "uuid": "890a5ff4-0f1a-485a-8aee-44e29205b80f", 00:16:33.204 "strip_size_kb": 64, 00:16:33.204 "state": "online", 00:16:33.204 "raid_level": "raid0", 00:16:33.204 "superblock": true, 00:16:33.204 "num_base_bdevs": 2, 00:16:33.204 "num_base_bdevs_discovered": 2, 00:16:33.204 "num_base_bdevs_operational": 2, 00:16:33.204 "base_bdevs_list": [ 00:16:33.204 { 00:16:33.204 "name": "pt1", 00:16:33.204 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:33.204 "is_configured": true, 00:16:33.204 "data_offset": 2048, 00:16:33.204 "data_size": 63488 00:16:33.204 }, 00:16:33.204 { 00:16:33.204 "name": "pt2", 00:16:33.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.204 "is_configured": true, 00:16:33.204 "data_offset": 2048, 00:16:33.204 "data_size": 63488 00:16:33.204 } 00:16:33.204 ] 00:16:33.204 }' 00:16:33.204 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.204 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # 
verify_raid_bdev_properties raid_bdev1 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:33.774 [2024-11-20 05:27:05.304915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:33.774 "name": "raid_bdev1", 00:16:33.774 "aliases": [ 00:16:33.774 "890a5ff4-0f1a-485a-8aee-44e29205b80f" 00:16:33.774 ], 00:16:33.774 "product_name": "Raid Volume", 00:16:33.774 "block_size": 512, 00:16:33.774 "num_blocks": 126976, 00:16:33.774 "uuid": "890a5ff4-0f1a-485a-8aee-44e29205b80f", 00:16:33.774 "assigned_rate_limits": { 00:16:33.774 "rw_ios_per_sec": 0, 00:16:33.774 "rw_mbytes_per_sec": 0, 00:16:33.774 "r_mbytes_per_sec": 0, 00:16:33.774 "w_mbytes_per_sec": 0 00:16:33.774 }, 00:16:33.774 "claimed": false, 00:16:33.774 "zoned": false, 00:16:33.774 "supported_io_types": { 00:16:33.774 "read": true, 00:16:33.774 "write": true, 00:16:33.774 "unmap": true, 
00:16:33.774 "flush": true, 00:16:33.774 "reset": true, 00:16:33.774 "nvme_admin": false, 00:16:33.774 "nvme_io": false, 00:16:33.774 "nvme_io_md": false, 00:16:33.774 "write_zeroes": true, 00:16:33.774 "zcopy": false, 00:16:33.774 "get_zone_info": false, 00:16:33.774 "zone_management": false, 00:16:33.774 "zone_append": false, 00:16:33.774 "compare": false, 00:16:33.774 "compare_and_write": false, 00:16:33.774 "abort": false, 00:16:33.774 "seek_hole": false, 00:16:33.774 "seek_data": false, 00:16:33.774 "copy": false, 00:16:33.774 "nvme_iov_md": false 00:16:33.774 }, 00:16:33.774 "memory_domains": [ 00:16:33.774 { 00:16:33.774 "dma_device_id": "system", 00:16:33.774 "dma_device_type": 1 00:16:33.774 }, 00:16:33.774 { 00:16:33.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.774 "dma_device_type": 2 00:16:33.774 }, 00:16:33.774 { 00:16:33.774 "dma_device_id": "system", 00:16:33.774 "dma_device_type": 1 00:16:33.774 }, 00:16:33.774 { 00:16:33.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.774 "dma_device_type": 2 00:16:33.774 } 00:16:33.774 ], 00:16:33.774 "driver_specific": { 00:16:33.774 "raid": { 00:16:33.774 "uuid": "890a5ff4-0f1a-485a-8aee-44e29205b80f", 00:16:33.774 "strip_size_kb": 64, 00:16:33.774 "state": "online", 00:16:33.774 "raid_level": "raid0", 00:16:33.774 "superblock": true, 00:16:33.774 "num_base_bdevs": 2, 00:16:33.774 "num_base_bdevs_discovered": 2, 00:16:33.774 "num_base_bdevs_operational": 2, 00:16:33.774 "base_bdevs_list": [ 00:16:33.774 { 00:16:33.774 "name": "pt1", 00:16:33.774 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:33.774 "is_configured": true, 00:16:33.774 "data_offset": 2048, 00:16:33.774 "data_size": 63488 00:16:33.774 }, 00:16:33.774 { 00:16:33.774 "name": "pt2", 00:16:33.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.774 "is_configured": true, 00:16:33.774 "data_offset": 2048, 00:16:33.774 "data_size": 63488 00:16:33.774 } 00:16:33.774 ] 00:16:33.774 } 00:16:33.774 } 00:16:33.774 }' 
00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:33.774 pt2' 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:33.774 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.775 [2024-11-20 05:27:05.464950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 890a5ff4-0f1a-485a-8aee-44e29205b80f '!=' 890a5ff4-0f1a-485a-8aee-44e29205b80f ']' 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60007 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60007 ']' 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60007 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60007 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:33.775 killing process with pid 60007 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60007' 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 60007 00:16:33.775 [2024-11-20 05:27:05.509473] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.775 05:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 60007 00:16:33.775 [2024-11-20 05:27:05.509589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.775 [2024-11-20 05:27:05.509641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.775 [2024-11-20 05:27:05.509652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:34.033 [2024-11-20 05:27:05.618265] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.598 05:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:34.598 00:16:34.598 real 0m3.203s 00:16:34.598 user 0m4.544s 00:16:34.598 sys 0m0.508s 00:16:34.598 05:27:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:34.598 05:27:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.598 ************************************ 00:16:34.598 END TEST raid_superblock_test 00:16:34.598 ************************************ 00:16:34.598 05:27:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test 
raid_io_error_test raid0 2 read 00:16:34.598 05:27:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:34.598 05:27:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:34.598 05:27:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.598 ************************************ 00:16:34.598 START TEST raid_read_error_test 00:16:34.598 ************************************ 00:16:34.598 05:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:16:34.598 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:16:34.598 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:16:34.598 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:34.598 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:34.598 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:34.598 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:34.598 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:34.598 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:34.598 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local 
raid_bdev_name=raid_bdev1 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WnLWoNaXOt 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60205 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60205 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 60205 ']' 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:34.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:34.599 05:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.599 [2024-11-20 05:27:06.355724] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:16:34.599 [2024-11-20 05:27:06.355925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60205 ] 00:16:34.856 [2024-11-20 05:27:06.527339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.856 [2024-11-20 05:27:06.630358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.114 [2024-11-20 05:27:06.752253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.114 [2024-11-20 05:27:06.752304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.373 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:35.373 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:16:35.373 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:35.373 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:35.373 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.373 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.632 BaseBdev1_malloc 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.632 true 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.632 [2024-11-20 05:27:07.227999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:35.632 [2024-11-20 05:27:07.228055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.632 [2024-11-20 05:27:07.228075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:35.632 [2024-11-20 05:27:07.228086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.632 [2024-11-20 05:27:07.229961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.632 [2024-11-20 05:27:07.229993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:35.632 BaseBdev1 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:16:35.632 BaseBdev2_malloc 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.632 true 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.632 [2024-11-20 05:27:07.269922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:35.632 [2024-11-20 05:27:07.269978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.632 [2024-11-20 05:27:07.269993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:35.632 [2024-11-20 05:27:07.270003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.632 [2024-11-20 05:27:07.271895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.632 [2024-11-20 05:27:07.271926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:35.632 BaseBdev2 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:16:35.632 05:27:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.632 [2024-11-20 05:27:07.277974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.632 [2024-11-20 05:27:07.279608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.632 [2024-11-20 05:27:07.279769] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:35.632 [2024-11-20 05:27:07.279787] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:35.632 [2024-11-20 05:27:07.280014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:35.632 [2024-11-20 05:27:07.280150] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:35.632 [2024-11-20 05:27:07.280165] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:35.632 [2024-11-20 05:27:07.280287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.632 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.633 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.633 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.633 "name": "raid_bdev1", 00:16:35.633 "uuid": "8650eb76-7650-47c1-b02a-2b9742edd21a", 00:16:35.633 "strip_size_kb": 64, 00:16:35.633 "state": "online", 00:16:35.633 "raid_level": "raid0", 00:16:35.633 "superblock": true, 00:16:35.633 "num_base_bdevs": 2, 00:16:35.633 "num_base_bdevs_discovered": 2, 00:16:35.633 "num_base_bdevs_operational": 2, 00:16:35.633 "base_bdevs_list": [ 00:16:35.633 { 00:16:35.633 "name": "BaseBdev1", 00:16:35.633 "uuid": "8f82c067-6302-53e8-a56e-4350c5593065", 00:16:35.633 "is_configured": true, 00:16:35.633 "data_offset": 2048, 00:16:35.633 "data_size": 63488 00:16:35.633 }, 00:16:35.633 { 00:16:35.633 "name": "BaseBdev2", 00:16:35.633 "uuid": "322cf838-0ec0-5b5b-8a62-afc9616c77a7", 00:16:35.633 "is_configured": true, 00:16:35.633 "data_offset": 2048, 00:16:35.633 "data_size": 63488 00:16:35.633 } 00:16:35.633 ] 00:16:35.633 }' 00:16:35.633 05:27:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.633 05:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.891 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:35.891 05:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:36.149 [2024-11-20 05:27:07.747036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.082 "name": "raid_bdev1", 00:16:37.082 "uuid": "8650eb76-7650-47c1-b02a-2b9742edd21a", 00:16:37.082 "strip_size_kb": 64, 00:16:37.082 "state": "online", 00:16:37.082 "raid_level": "raid0", 00:16:37.082 "superblock": true, 00:16:37.082 "num_base_bdevs": 2, 00:16:37.082 "num_base_bdevs_discovered": 2, 00:16:37.082 "num_base_bdevs_operational": 2, 00:16:37.082 "base_bdevs_list": [ 00:16:37.082 { 00:16:37.082 "name": "BaseBdev1", 00:16:37.082 "uuid": "8f82c067-6302-53e8-a56e-4350c5593065", 00:16:37.082 "is_configured": true, 00:16:37.082 "data_offset": 2048, 00:16:37.082 "data_size": 63488 00:16:37.082 }, 00:16:37.082 { 00:16:37.082 "name": "BaseBdev2", 00:16:37.082 "uuid": "322cf838-0ec0-5b5b-8a62-afc9616c77a7", 00:16:37.082 "is_configured": true, 00:16:37.082 "data_offset": 2048, 00:16:37.082 "data_size": 63488 00:16:37.082 } 00:16:37.082 ] 00:16:37.082 }' 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.082 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.340 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:37.340 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.340 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.340 [2024-11-20 05:27:08.952143] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.340 [2024-11-20 05:27:08.952186] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.340 [2024-11-20 05:27:08.954633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.340 [2024-11-20 05:27:08.954679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.340 [2024-11-20 05:27:08.954711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.340 [2024-11-20 05:27:08.954721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:37.340 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.340 05:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60205 00:16:37.340 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 60205 ']' 00:16:37.340 { 00:16:37.340 "results": [ 00:16:37.340 { 00:16:37.340 "job": "raid_bdev1", 00:16:37.340 "core_mask": "0x1", 00:16:37.340 "workload": "randrw", 00:16:37.340 "percentage": 50, 00:16:37.340 "status": "finished", 00:16:37.340 "queue_depth": 1, 00:16:37.340 "io_size": 131072, 00:16:37.340 "runtime": 1.203274, 00:16:37.340 "iops": 16972.02798365127, 00:16:37.340 "mibps": 2121.503497956409, 00:16:37.340 "io_failed": 1, 
00:16:37.340 "io_timeout": 0, 00:16:37.340 "avg_latency_us": 81.54821524751506, 00:16:37.340 "min_latency_us": 25.6, 00:16:37.340 "max_latency_us": 1380.0369230769231 00:16:37.340 } 00:16:37.340 ], 00:16:37.340 "core_count": 1 00:16:37.340 } 00:16:37.340 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 60205 00:16:37.340 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:16:37.340 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:37.340 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60205 00:16:37.340 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:37.340 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:37.340 killing process with pid 60205 00:16:37.340 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60205' 00:16:37.340 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 60205 00:16:37.340 [2024-11-20 05:27:08.989520] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:37.340 05:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 60205 00:16:37.340 [2024-11-20 05:27:09.060762] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.906 05:27:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WnLWoNaXOt 00:16:37.906 05:27:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:37.906 05:27:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:37.906 05:27:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.83 00:16:37.906 05:27:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # 
has_redundancy raid0 00:16:37.906 05:27:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:37.906 05:27:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:37.906 05:27:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.83 != \0\.\0\0 ]] 00:16:37.906 00:16:37.906 real 0m3.452s 00:16:37.906 user 0m4.173s 00:16:37.906 sys 0m0.452s 00:16:37.906 05:27:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:37.906 ************************************ 00:16:37.906 END TEST raid_read_error_test 00:16:37.906 05:27:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.906 ************************************ 00:16:38.164 05:27:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:16:38.164 05:27:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:38.164 05:27:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:38.164 05:27:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:38.164 ************************************ 00:16:38.164 START TEST raid_write_error_test 00:16:38.164 ************************************ 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:38.164 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:16:38.165 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:38.165 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:38.165 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:38.165 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TZ5LRBh7ud 00:16:38.165 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60339 00:16:38.165 05:27:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 60339 00:16:38.165 05:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 60339 ']' 00:16:38.165 05:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.165 05:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:38.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.165 05:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.165 05:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:38.165 05:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:38.165 05:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.165 [2024-11-20 05:27:09.825003] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:16:38.165 [2024-11-20 05:27:09.825121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60339 ] 00:16:38.165 [2024-11-20 05:27:09.978520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.423 [2024-11-20 05:27:10.082833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.423 [2024-11-20 05:27:10.206251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.423 [2024-11-20 05:27:10.206324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.988 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:38.988 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:16:38.988 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:38.988 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:38.988 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.988 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.988 BaseBdev1_malloc 00:16:38.988 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.988 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:38.988 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.988 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.988 true 00:16:38.988 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:38.988 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:38.988 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.988 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.988 [2024-11-20 05:27:10.730293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:38.988 [2024-11-20 05:27:10.730352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.988 [2024-11-20 05:27:10.730380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:38.988 [2024-11-20 05:27:10.730390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.988 [2024-11-20 05:27:10.732327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.988 [2024-11-20 05:27:10.732385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:38.988 BaseBdev1 00:16:38.988 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.989 BaseBdev2_malloc 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:38.989 05:27:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.989 true 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.989 [2024-11-20 05:27:10.772400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:38.989 [2024-11-20 05:27:10.772456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.989 [2024-11-20 05:27:10.772472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:38.989 [2024-11-20 05:27:10.772481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.989 [2024-11-20 05:27:10.774411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.989 [2024-11-20 05:27:10.774443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:38.989 BaseBdev2 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.989 [2024-11-20 05:27:10.780454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:16:38.989 [2024-11-20 05:27:10.782110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:38.989 [2024-11-20 05:27:10.782279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:38.989 [2024-11-20 05:27:10.782293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:38.989 [2024-11-20 05:27:10.782525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:38.989 [2024-11-20 05:27:10.782667] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:38.989 [2024-11-20 05:27:10.782678] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:38.989 [2024-11-20 05:27:10.782811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.989 "name": "raid_bdev1", 00:16:38.989 "uuid": "1c61fe7c-25c2-4e51-a9db-4e1b7618f82b", 00:16:38.989 "strip_size_kb": 64, 00:16:38.989 "state": "online", 00:16:38.989 "raid_level": "raid0", 00:16:38.989 "superblock": true, 00:16:38.989 "num_base_bdevs": 2, 00:16:38.989 "num_base_bdevs_discovered": 2, 00:16:38.989 "num_base_bdevs_operational": 2, 00:16:38.989 "base_bdevs_list": [ 00:16:38.989 { 00:16:38.989 "name": "BaseBdev1", 00:16:38.989 "uuid": "c2dfdc34-776d-5fc9-899d-a54b90012dc0", 00:16:38.989 "is_configured": true, 00:16:38.989 "data_offset": 2048, 00:16:38.989 "data_size": 63488 00:16:38.989 }, 00:16:38.989 { 00:16:38.989 "name": "BaseBdev2", 00:16:38.989 "uuid": "67463c7c-17b1-55f0-a3d2-f21c45a2f465", 00:16:38.989 "is_configured": true, 00:16:38.989 "data_offset": 2048, 00:16:38.989 "data_size": 63488 00:16:38.989 } 00:16:38.989 ] 00:16:38.989 }' 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.989 05:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.555 05:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:39.555 05:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:39.555 [2024-11-20 05:27:11.217385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.491 "name": "raid_bdev1", 00:16:40.491 "uuid": "1c61fe7c-25c2-4e51-a9db-4e1b7618f82b", 00:16:40.491 "strip_size_kb": 64, 00:16:40.491 "state": "online", 00:16:40.491 "raid_level": "raid0", 00:16:40.491 "superblock": true, 00:16:40.491 "num_base_bdevs": 2, 00:16:40.491 "num_base_bdevs_discovered": 2, 00:16:40.491 "num_base_bdevs_operational": 2, 00:16:40.491 "base_bdevs_list": [ 00:16:40.491 { 00:16:40.491 "name": "BaseBdev1", 00:16:40.491 "uuid": "c2dfdc34-776d-5fc9-899d-a54b90012dc0", 00:16:40.491 "is_configured": true, 00:16:40.491 "data_offset": 2048, 00:16:40.491 "data_size": 63488 00:16:40.491 }, 00:16:40.491 { 00:16:40.491 "name": "BaseBdev2", 00:16:40.491 "uuid": "67463c7c-17b1-55f0-a3d2-f21c45a2f465", 00:16:40.491 "is_configured": true, 00:16:40.491 "data_offset": 2048, 00:16:40.491 "data_size": 63488 00:16:40.491 } 00:16:40.491 ] 00:16:40.491 }' 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.491 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.749 05:27:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:40.749 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.750 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.750 [2024-11-20 05:27:12.489408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:40.750 [2024-11-20 05:27:12.489451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.750 [2024-11-20 05:27:12.491982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.750 [2024-11-20 05:27:12.492030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.750 [2024-11-20 05:27:12.492063] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.750 [2024-11-20 05:27:12.492073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:40.750 { 00:16:40.750 "results": [ 00:16:40.750 { 00:16:40.750 "job": "raid_bdev1", 00:16:40.750 "core_mask": "0x1", 00:16:40.750 "workload": "randrw", 00:16:40.750 "percentage": 50, 00:16:40.750 "status": "finished", 00:16:40.750 "queue_depth": 1, 00:16:40.750 "io_size": 131072, 00:16:40.750 "runtime": 1.270362, 00:16:40.750 "iops": 17010.112078289494, 00:16:40.750 "mibps": 2126.2640097861868, 00:16:40.750 "io_failed": 1, 00:16:40.750 "io_timeout": 0, 00:16:40.750 "avg_latency_us": 81.29945538034386, 00:16:40.750 "min_latency_us": 25.993846153846153, 00:16:40.750 "max_latency_us": 1405.2430769230768 00:16:40.750 } 00:16:40.750 ], 00:16:40.750 "core_count": 1 00:16:40.750 } 00:16:40.750 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.750 05:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60339 00:16:40.750 05:27:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 60339 ']' 00:16:40.750 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 60339 00:16:40.750 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:16:40.750 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:40.750 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60339 00:16:40.750 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:40.750 killing process with pid 60339 00:16:40.750 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:40.750 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60339' 00:16:40.750 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 60339 00:16:40.750 05:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 60339 00:16:40.750 [2024-11-20 05:27:12.520102] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:41.008 [2024-11-20 05:27:12.592867] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:41.576 05:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TZ5LRBh7ud 00:16:41.576 05:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:41.576 05:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:41.576 05:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:16:41.576 05:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:16:41.576 05:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:41.576 05:27:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:41.576 05:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:16:41.576 00:16:41.576 real 0m3.482s 00:16:41.576 user 0m4.230s 00:16:41.576 sys 0m0.406s 00:16:41.576 05:27:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:41.576 05:27:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 ************************************ 00:16:41.576 END TEST raid_write_error_test 00:16:41.576 ************************************ 00:16:41.576 05:27:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:41.576 05:27:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:16:41.576 05:27:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:41.576 05:27:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:41.576 05:27:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 ************************************ 00:16:41.576 START TEST raid_state_function_test 00:16:41.576 ************************************ 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60472 
00:16:41.576 Process raid pid: 60472 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60472' 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60472 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60472 ']' 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:41.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 05:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:41.576 [2024-11-20 05:27:13.349900] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:16:41.576 [2024-11-20 05:27:13.350020] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.835 [2024-11-20 05:27:13.506722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.835 [2024-11-20 05:27:13.609285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.092 [2024-11-20 05:27:13.733052] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.092 [2024-11-20 05:27:13.733092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.389 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:42.389 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:16:42.389 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:42.389 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.389 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.389 [2024-11-20 05:27:14.160811] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:42.389 [2024-11-20 05:27:14.160866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:42.389 [2024-11-20 05:27:14.160876] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.389 [2024-11-20 05:27:14.160884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.390 05:27:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.390 "name": "Existed_Raid", 00:16:42.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.390 "strip_size_kb": 64, 00:16:42.390 "state": "configuring", 00:16:42.390 
"raid_level": "concat", 00:16:42.390 "superblock": false, 00:16:42.390 "num_base_bdevs": 2, 00:16:42.390 "num_base_bdevs_discovered": 0, 00:16:42.390 "num_base_bdevs_operational": 2, 00:16:42.390 "base_bdevs_list": [ 00:16:42.390 { 00:16:42.390 "name": "BaseBdev1", 00:16:42.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.390 "is_configured": false, 00:16:42.390 "data_offset": 0, 00:16:42.390 "data_size": 0 00:16:42.390 }, 00:16:42.390 { 00:16:42.390 "name": "BaseBdev2", 00:16:42.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.390 "is_configured": false, 00:16:42.390 "data_offset": 0, 00:16:42.390 "data_size": 0 00:16:42.390 } 00:16:42.390 ] 00:16:42.390 }' 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.390 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.959 [2024-11-20 05:27:14.488848] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.959 [2024-11-20 05:27:14.488891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:16:42.959 [2024-11-20 05:27:14.496836] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:42.959 [2024-11-20 05:27:14.496882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:42.959 [2024-11-20 05:27:14.496890] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.959 [2024-11-20 05:27:14.496901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.959 [2024-11-20 05:27:14.527314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.959 BaseBdev1 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.959 [ 00:16:42.959 { 00:16:42.959 "name": "BaseBdev1", 00:16:42.959 "aliases": [ 00:16:42.959 "5fb5966d-2073-4152-959f-7a1b099464e4" 00:16:42.959 ], 00:16:42.959 "product_name": "Malloc disk", 00:16:42.959 "block_size": 512, 00:16:42.959 "num_blocks": 65536, 00:16:42.959 "uuid": "5fb5966d-2073-4152-959f-7a1b099464e4", 00:16:42.959 "assigned_rate_limits": { 00:16:42.959 "rw_ios_per_sec": 0, 00:16:42.959 "rw_mbytes_per_sec": 0, 00:16:42.959 "r_mbytes_per_sec": 0, 00:16:42.959 "w_mbytes_per_sec": 0 00:16:42.959 }, 00:16:42.959 "claimed": true, 00:16:42.959 "claim_type": "exclusive_write", 00:16:42.959 "zoned": false, 00:16:42.959 "supported_io_types": { 00:16:42.959 "read": true, 00:16:42.959 "write": true, 00:16:42.959 "unmap": true, 00:16:42.959 "flush": true, 00:16:42.959 "reset": true, 00:16:42.959 "nvme_admin": false, 00:16:42.959 "nvme_io": false, 00:16:42.959 "nvme_io_md": false, 00:16:42.959 "write_zeroes": true, 00:16:42.959 "zcopy": true, 00:16:42.959 "get_zone_info": false, 00:16:42.959 "zone_management": false, 00:16:42.959 "zone_append": false, 00:16:42.959 "compare": false, 00:16:42.959 "compare_and_write": false, 00:16:42.959 "abort": true, 00:16:42.959 "seek_hole": false, 00:16:42.959 "seek_data": false, 00:16:42.959 "copy": true, 00:16:42.959 "nvme_iov_md": 
false 00:16:42.959 }, 00:16:42.959 "memory_domains": [ 00:16:42.959 { 00:16:42.959 "dma_device_id": "system", 00:16:42.959 "dma_device_type": 1 00:16:42.959 }, 00:16:42.959 { 00:16:42.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.959 "dma_device_type": 2 00:16:42.959 } 00:16:42.959 ], 00:16:42.959 "driver_specific": {} 00:16:42.959 } 00:16:42.959 ] 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.959 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.960 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:42.960 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.960 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.960 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.960 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.960 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.960 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.960 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.960 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.960 
05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.960 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.960 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.960 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.960 "name": "Existed_Raid", 00:16:42.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.960 "strip_size_kb": 64, 00:16:42.960 "state": "configuring", 00:16:42.960 "raid_level": "concat", 00:16:42.960 "superblock": false, 00:16:42.960 "num_base_bdevs": 2, 00:16:42.960 "num_base_bdevs_discovered": 1, 00:16:42.960 "num_base_bdevs_operational": 2, 00:16:42.960 "base_bdevs_list": [ 00:16:42.960 { 00:16:42.960 "name": "BaseBdev1", 00:16:42.960 "uuid": "5fb5966d-2073-4152-959f-7a1b099464e4", 00:16:42.960 "is_configured": true, 00:16:42.960 "data_offset": 0, 00:16:42.960 "data_size": 65536 00:16:42.960 }, 00:16:42.960 { 00:16:42.960 "name": "BaseBdev2", 00:16:42.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.960 "is_configured": false, 00:16:42.960 "data_offset": 0, 00:16:42.960 "data_size": 0 00:16:42.960 } 00:16:42.960 ] 00:16:42.960 }' 00:16:42.960 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.960 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.219 [2024-11-20 05:27:14.863440] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:43.219 [2024-11-20 05:27:14.863503] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.219 [2024-11-20 05:27:14.871475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:43.219 [2024-11-20 05:27:14.873182] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:43.219 [2024-11-20 05:27:14.873225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.219 "name": "Existed_Raid", 00:16:43.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.219 "strip_size_kb": 64, 00:16:43.219 "state": "configuring", 00:16:43.219 "raid_level": "concat", 00:16:43.219 "superblock": false, 00:16:43.219 "num_base_bdevs": 2, 00:16:43.219 "num_base_bdevs_discovered": 1, 00:16:43.219 "num_base_bdevs_operational": 2, 00:16:43.219 "base_bdevs_list": [ 00:16:43.219 { 00:16:43.219 "name": "BaseBdev1", 00:16:43.219 "uuid": "5fb5966d-2073-4152-959f-7a1b099464e4", 00:16:43.219 "is_configured": true, 00:16:43.219 "data_offset": 0, 00:16:43.219 "data_size": 65536 00:16:43.219 }, 00:16:43.219 { 00:16:43.219 "name": "BaseBdev2", 00:16:43.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.219 "is_configured": false, 00:16:43.219 "data_offset": 0, 00:16:43.219 "data_size": 0 00:16:43.219 } 
00:16:43.219 ] 00:16:43.219 }' 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.219 05:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.478 [2024-11-20 05:27:15.228288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.478 [2024-11-20 05:27:15.228343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:43.478 [2024-11-20 05:27:15.228350] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:43.478 [2024-11-20 05:27:15.228675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:43.478 [2024-11-20 05:27:15.228806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:43.478 [2024-11-20 05:27:15.228823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:43.478 [2024-11-20 05:27:15.229033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.478 BaseBdev2 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:43.478 05:27:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.478 [ 00:16:43.478 { 00:16:43.478 "name": "BaseBdev2", 00:16:43.478 "aliases": [ 00:16:43.478 "ec87eec5-b2d6-45c2-864f-0772d415dca8" 00:16:43.478 ], 00:16:43.478 "product_name": "Malloc disk", 00:16:43.478 "block_size": 512, 00:16:43.478 "num_blocks": 65536, 00:16:43.478 "uuid": "ec87eec5-b2d6-45c2-864f-0772d415dca8", 00:16:43.478 "assigned_rate_limits": { 00:16:43.478 "rw_ios_per_sec": 0, 00:16:43.478 "rw_mbytes_per_sec": 0, 00:16:43.478 "r_mbytes_per_sec": 0, 00:16:43.478 "w_mbytes_per_sec": 0 00:16:43.478 }, 00:16:43.478 "claimed": true, 00:16:43.478 "claim_type": "exclusive_write", 00:16:43.478 "zoned": false, 00:16:43.478 "supported_io_types": { 00:16:43.478 "read": true, 00:16:43.478 "write": true, 00:16:43.478 "unmap": true, 00:16:43.478 "flush": true, 00:16:43.478 "reset": true, 00:16:43.478 "nvme_admin": false, 00:16:43.478 "nvme_io": false, 00:16:43.478 "nvme_io_md": 
false, 00:16:43.478 "write_zeroes": true, 00:16:43.478 "zcopy": true, 00:16:43.478 "get_zone_info": false, 00:16:43.478 "zone_management": false, 00:16:43.478 "zone_append": false, 00:16:43.478 "compare": false, 00:16:43.478 "compare_and_write": false, 00:16:43.478 "abort": true, 00:16:43.478 "seek_hole": false, 00:16:43.478 "seek_data": false, 00:16:43.478 "copy": true, 00:16:43.478 "nvme_iov_md": false 00:16:43.478 }, 00:16:43.478 "memory_domains": [ 00:16:43.478 { 00:16:43.478 "dma_device_id": "system", 00:16:43.478 "dma_device_type": 1 00:16:43.478 }, 00:16:43.478 { 00:16:43.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.478 "dma_device_type": 2 00:16:43.478 } 00:16:43.478 ], 00:16:43.478 "driver_specific": {} 00:16:43.478 } 00:16:43.478 ] 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.478 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.478 "name": "Existed_Raid", 00:16:43.478 "uuid": "a931fe0f-bac8-4e22-8f4b-85345b269c3f", 00:16:43.478 "strip_size_kb": 64, 00:16:43.478 "state": "online", 00:16:43.478 "raid_level": "concat", 00:16:43.478 "superblock": false, 00:16:43.479 "num_base_bdevs": 2, 00:16:43.479 "num_base_bdevs_discovered": 2, 00:16:43.479 "num_base_bdevs_operational": 2, 00:16:43.479 "base_bdevs_list": [ 00:16:43.479 { 00:16:43.479 "name": "BaseBdev1", 00:16:43.479 "uuid": "5fb5966d-2073-4152-959f-7a1b099464e4", 00:16:43.479 "is_configured": true, 00:16:43.479 "data_offset": 0, 00:16:43.479 "data_size": 65536 00:16:43.479 }, 00:16:43.479 { 00:16:43.479 "name": "BaseBdev2", 00:16:43.479 "uuid": "ec87eec5-b2d6-45c2-864f-0772d415dca8", 00:16:43.479 "is_configured": true, 00:16:43.479 "data_offset": 0, 00:16:43.479 "data_size": 65536 00:16:43.479 } 00:16:43.479 ] 00:16:43.479 }' 00:16:43.479 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:16:43.479 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.738 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:43.738 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:43.738 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:43.738 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:43.738 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:43.738 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:43.738 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:43.738 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:43.738 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.738 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.738 [2024-11-20 05:27:15.564676] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.997 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.997 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:43.997 "name": "Existed_Raid", 00:16:43.997 "aliases": [ 00:16:43.997 "a931fe0f-bac8-4e22-8f4b-85345b269c3f" 00:16:43.997 ], 00:16:43.997 "product_name": "Raid Volume", 00:16:43.997 "block_size": 512, 00:16:43.997 "num_blocks": 131072, 00:16:43.997 "uuid": "a931fe0f-bac8-4e22-8f4b-85345b269c3f", 00:16:43.997 "assigned_rate_limits": { 00:16:43.997 "rw_ios_per_sec": 0, 00:16:43.997 "rw_mbytes_per_sec": 0, 00:16:43.997 "r_mbytes_per_sec": 
0, 00:16:43.997 "w_mbytes_per_sec": 0 00:16:43.997 }, 00:16:43.997 "claimed": false, 00:16:43.997 "zoned": false, 00:16:43.997 "supported_io_types": { 00:16:43.997 "read": true, 00:16:43.997 "write": true, 00:16:43.997 "unmap": true, 00:16:43.997 "flush": true, 00:16:43.997 "reset": true, 00:16:43.997 "nvme_admin": false, 00:16:43.997 "nvme_io": false, 00:16:43.997 "nvme_io_md": false, 00:16:43.997 "write_zeroes": true, 00:16:43.997 "zcopy": false, 00:16:43.997 "get_zone_info": false, 00:16:43.997 "zone_management": false, 00:16:43.997 "zone_append": false, 00:16:43.997 "compare": false, 00:16:43.997 "compare_and_write": false, 00:16:43.997 "abort": false, 00:16:43.997 "seek_hole": false, 00:16:43.997 "seek_data": false, 00:16:43.997 "copy": false, 00:16:43.997 "nvme_iov_md": false 00:16:43.997 }, 00:16:43.997 "memory_domains": [ 00:16:43.997 { 00:16:43.997 "dma_device_id": "system", 00:16:43.997 "dma_device_type": 1 00:16:43.997 }, 00:16:43.997 { 00:16:43.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.997 "dma_device_type": 2 00:16:43.997 }, 00:16:43.997 { 00:16:43.997 "dma_device_id": "system", 00:16:43.997 "dma_device_type": 1 00:16:43.997 }, 00:16:43.997 { 00:16:43.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.997 "dma_device_type": 2 00:16:43.997 } 00:16:43.997 ], 00:16:43.997 "driver_specific": { 00:16:43.997 "raid": { 00:16:43.997 "uuid": "a931fe0f-bac8-4e22-8f4b-85345b269c3f", 00:16:43.997 "strip_size_kb": 64, 00:16:43.997 "state": "online", 00:16:43.997 "raid_level": "concat", 00:16:43.997 "superblock": false, 00:16:43.997 "num_base_bdevs": 2, 00:16:43.997 "num_base_bdevs_discovered": 2, 00:16:43.997 "num_base_bdevs_operational": 2, 00:16:43.997 "base_bdevs_list": [ 00:16:43.997 { 00:16:43.997 "name": "BaseBdev1", 00:16:43.997 "uuid": "5fb5966d-2073-4152-959f-7a1b099464e4", 00:16:43.997 "is_configured": true, 00:16:43.997 "data_offset": 0, 00:16:43.998 "data_size": 65536 00:16:43.998 }, 00:16:43.998 { 00:16:43.998 "name": "BaseBdev2", 
00:16:43.998 "uuid": "ec87eec5-b2d6-45c2-864f-0772d415dca8", 00:16:43.998 "is_configured": true, 00:16:43.998 "data_offset": 0, 00:16:43.998 "data_size": 65536 00:16:43.998 } 00:16:43.998 ] 00:16:43.998 } 00:16:43.998 } 00:16:43.998 }' 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:43.998 BaseBdev2' 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.998 [2024-11-20 05:27:15.720497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:43.998 [2024-11-20 05:27:15.720541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.998 [2024-11-20 05:27:15.720591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.998 "name": "Existed_Raid", 00:16:43.998 "uuid": "a931fe0f-bac8-4e22-8f4b-85345b269c3f", 00:16:43.998 "strip_size_kb": 64, 00:16:43.998 
"state": "offline", 00:16:43.998 "raid_level": "concat", 00:16:43.998 "superblock": false, 00:16:43.998 "num_base_bdevs": 2, 00:16:43.998 "num_base_bdevs_discovered": 1, 00:16:43.998 "num_base_bdevs_operational": 1, 00:16:43.998 "base_bdevs_list": [ 00:16:43.998 { 00:16:43.998 "name": null, 00:16:43.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.998 "is_configured": false, 00:16:43.998 "data_offset": 0, 00:16:43.998 "data_size": 65536 00:16:43.998 }, 00:16:43.998 { 00:16:43.998 "name": "BaseBdev2", 00:16:43.998 "uuid": "ec87eec5-b2d6-45c2-864f-0772d415dca8", 00:16:43.998 "is_configured": true, 00:16:43.998 "data_offset": 0, 00:16:43.998 "data_size": 65536 00:16:43.998 } 00:16:43.998 ] 00:16:43.998 }' 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.998 05:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.565 [2024-11-20 05:27:16.150454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:44.565 [2024-11-20 05:27:16.150518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60472 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60472 ']' 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 60472 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60472 00:16:44.565 killing process with pid 60472 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60472' 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60472 00:16:44.565 [2024-11-20 05:27:16.257731] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:44.565 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60472 00:16:44.565 [2024-11-20 05:27:16.266643] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:45.157 ************************************ 00:16:45.157 END TEST raid_state_function_test 00:16:45.157 ************************************ 00:16:45.157 05:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:45.157 00:16:45.157 real 0m3.591s 00:16:45.157 user 0m5.206s 00:16:45.157 sys 0m0.600s 00:16:45.157 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:45.157 05:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.157 05:27:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:16:45.157 05:27:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:16:45.157 05:27:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:45.158 05:27:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:45.158 ************************************ 00:16:45.158 START TEST raid_state_function_test_sb 00:16:45.158 ************************************ 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:45.158 Process raid pid: 60708 00:16:45.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60708 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60708' 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60708 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 60708 ']' 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:45.158 05:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.158 [2024-11-20 05:27:16.982679] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:16:45.158 [2024-11-20 05:27:16.982800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.418 [2024-11-20 05:27:17.139346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.418 [2024-11-20 05:27:17.245056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.677 [2024-11-20 05:27:17.369321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.677 [2024-11-20 05:27:17.369388] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.241 05:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.242 [2024-11-20 05:27:17.828261] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.242 [2024-11-20 05:27:17.828498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.242 [2024-11-20 05:27:17.828598] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.242 [2024-11-20 05:27:17.828622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.242 "name": "Existed_Raid", 00:16:46.242 "uuid": "88d490be-0616-4800-975d-c5019fd06f28", 00:16:46.242 
"strip_size_kb": 64, 00:16:46.242 "state": "configuring", 00:16:46.242 "raid_level": "concat", 00:16:46.242 "superblock": true, 00:16:46.242 "num_base_bdevs": 2, 00:16:46.242 "num_base_bdevs_discovered": 0, 00:16:46.242 "num_base_bdevs_operational": 2, 00:16:46.242 "base_bdevs_list": [ 00:16:46.242 { 00:16:46.242 "name": "BaseBdev1", 00:16:46.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.242 "is_configured": false, 00:16:46.242 "data_offset": 0, 00:16:46.242 "data_size": 0 00:16:46.242 }, 00:16:46.242 { 00:16:46.242 "name": "BaseBdev2", 00:16:46.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.242 "is_configured": false, 00:16:46.242 "data_offset": 0, 00:16:46.242 "data_size": 0 00:16:46.242 } 00:16:46.242 ] 00:16:46.242 }' 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.242 05:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.499 [2024-11-20 05:27:18.140275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:46.499 [2024-11-20 05:27:18.140325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.499 [2024-11-20 05:27:18.148267] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.499 [2024-11-20 05:27:18.148313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.499 [2024-11-20 05:27:18.148322] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.499 [2024-11-20 05:27:18.148332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.499 [2024-11-20 05:27:18.178744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.499 BaseBdev1 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.499 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.499 [ 00:16:46.499 { 00:16:46.499 "name": "BaseBdev1", 00:16:46.499 "aliases": [ 00:16:46.499 "ab9fb43b-c16b-4ffe-81ae-7b8441c67bf6" 00:16:46.499 ], 00:16:46.499 "product_name": "Malloc disk", 00:16:46.499 "block_size": 512, 00:16:46.499 "num_blocks": 65536, 00:16:46.499 "uuid": "ab9fb43b-c16b-4ffe-81ae-7b8441c67bf6", 00:16:46.499 "assigned_rate_limits": { 00:16:46.499 "rw_ios_per_sec": 0, 00:16:46.499 "rw_mbytes_per_sec": 0, 00:16:46.499 "r_mbytes_per_sec": 0, 00:16:46.499 "w_mbytes_per_sec": 0 00:16:46.499 }, 00:16:46.499 "claimed": true, 00:16:46.499 "claim_type": "exclusive_write", 00:16:46.499 "zoned": false, 00:16:46.499 "supported_io_types": { 00:16:46.499 "read": true, 00:16:46.499 "write": true, 00:16:46.499 "unmap": true, 00:16:46.499 "flush": true, 00:16:46.499 "reset": true, 00:16:46.500 "nvme_admin": false, 00:16:46.500 "nvme_io": false, 00:16:46.500 "nvme_io_md": false, 00:16:46.500 "write_zeroes": true, 00:16:46.500 "zcopy": true, 00:16:46.500 "get_zone_info": false, 00:16:46.500 "zone_management": false, 00:16:46.500 "zone_append": false, 00:16:46.500 "compare": false, 00:16:46.500 
"compare_and_write": false, 00:16:46.500 "abort": true, 00:16:46.500 "seek_hole": false, 00:16:46.500 "seek_data": false, 00:16:46.500 "copy": true, 00:16:46.500 "nvme_iov_md": false 00:16:46.500 }, 00:16:46.500 "memory_domains": [ 00:16:46.500 { 00:16:46.500 "dma_device_id": "system", 00:16:46.500 "dma_device_type": 1 00:16:46.500 }, 00:16:46.500 { 00:16:46.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.500 "dma_device_type": 2 00:16:46.500 } 00:16:46.500 ], 00:16:46.500 "driver_specific": {} 00:16:46.500 } 00:16:46.500 ] 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.500 05:27:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.500 "name": "Existed_Raid", 00:16:46.500 "uuid": "04c2ce8f-d561-4727-b88c-ec6aaca4e8b8", 00:16:46.500 "strip_size_kb": 64, 00:16:46.500 "state": "configuring", 00:16:46.500 "raid_level": "concat", 00:16:46.500 "superblock": true, 00:16:46.500 "num_base_bdevs": 2, 00:16:46.500 "num_base_bdevs_discovered": 1, 00:16:46.500 "num_base_bdevs_operational": 2, 00:16:46.500 "base_bdevs_list": [ 00:16:46.500 { 00:16:46.500 "name": "BaseBdev1", 00:16:46.500 "uuid": "ab9fb43b-c16b-4ffe-81ae-7b8441c67bf6", 00:16:46.500 "is_configured": true, 00:16:46.500 "data_offset": 2048, 00:16:46.500 "data_size": 63488 00:16:46.500 }, 00:16:46.500 { 00:16:46.500 "name": "BaseBdev2", 00:16:46.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.500 "is_configured": false, 00:16:46.500 "data_offset": 0, 00:16:46.500 "data_size": 0 00:16:46.500 } 00:16:46.500 ] 00:16:46.500 }' 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.500 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.758 [2024-11-20 05:27:18.510859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:46.758 [2024-11-20 05:27:18.511101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.758 [2024-11-20 05:27:18.518900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.758 [2024-11-20 05:27:18.520689] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.758 [2024-11-20 05:27:18.520811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.758 "name": "Existed_Raid", 00:16:46.758 "uuid": "f5587b90-34f8-42b0-a1c6-044f6ab14794", 00:16:46.758 "strip_size_kb": 64, 00:16:46.758 "state": "configuring", 00:16:46.758 "raid_level": "concat", 00:16:46.758 "superblock": true, 00:16:46.758 "num_base_bdevs": 2, 00:16:46.758 "num_base_bdevs_discovered": 1, 00:16:46.758 "num_base_bdevs_operational": 2, 00:16:46.758 "base_bdevs_list": [ 00:16:46.758 { 00:16:46.758 "name": "BaseBdev1", 00:16:46.758 "uuid": 
"ab9fb43b-c16b-4ffe-81ae-7b8441c67bf6", 00:16:46.758 "is_configured": true, 00:16:46.758 "data_offset": 2048, 00:16:46.758 "data_size": 63488 00:16:46.758 }, 00:16:46.758 { 00:16:46.758 "name": "BaseBdev2", 00:16:46.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.758 "is_configured": false, 00:16:46.758 "data_offset": 0, 00:16:46.758 "data_size": 0 00:16:46.758 } 00:16:46.758 ] 00:16:46.758 }' 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.758 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.015 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:47.016 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.016 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.274 [2024-11-20 05:27:18.851693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:47.274 [2024-11-20 05:27:18.851956] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:47.274 [2024-11-20 05:27:18.851970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:47.274 BaseBdev2 00:16:47.274 [2024-11-20 05:27:18.852207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:47.274 [2024-11-20 05:27:18.852328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:47.274 [2024-11-20 05:27:18.852337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:47.274 [2024-11-20 05:27:18.852465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.274 [ 00:16:47.274 { 00:16:47.274 "name": "BaseBdev2", 00:16:47.274 "aliases": [ 00:16:47.274 "875e8693-5cf6-49b5-aa47-525edb0fb6e5" 00:16:47.274 ], 00:16:47.274 "product_name": "Malloc disk", 00:16:47.274 "block_size": 512, 00:16:47.274 "num_blocks": 65536, 00:16:47.274 "uuid": "875e8693-5cf6-49b5-aa47-525edb0fb6e5", 00:16:47.274 "assigned_rate_limits": { 00:16:47.274 "rw_ios_per_sec": 0, 00:16:47.274 "rw_mbytes_per_sec": 0, 00:16:47.274 "r_mbytes_per_sec": 0, 
00:16:47.274 "w_mbytes_per_sec": 0 00:16:47.274 }, 00:16:47.274 "claimed": true, 00:16:47.274 "claim_type": "exclusive_write", 00:16:47.274 "zoned": false, 00:16:47.274 "supported_io_types": { 00:16:47.274 "read": true, 00:16:47.274 "write": true, 00:16:47.274 "unmap": true, 00:16:47.274 "flush": true, 00:16:47.274 "reset": true, 00:16:47.274 "nvme_admin": false, 00:16:47.274 "nvme_io": false, 00:16:47.274 "nvme_io_md": false, 00:16:47.274 "write_zeroes": true, 00:16:47.274 "zcopy": true, 00:16:47.274 "get_zone_info": false, 00:16:47.274 "zone_management": false, 00:16:47.274 "zone_append": false, 00:16:47.274 "compare": false, 00:16:47.274 "compare_and_write": false, 00:16:47.274 "abort": true, 00:16:47.274 "seek_hole": false, 00:16:47.274 "seek_data": false, 00:16:47.274 "copy": true, 00:16:47.274 "nvme_iov_md": false 00:16:47.274 }, 00:16:47.274 "memory_domains": [ 00:16:47.274 { 00:16:47.274 "dma_device_id": "system", 00:16:47.274 "dma_device_type": 1 00:16:47.274 }, 00:16:47.274 { 00:16:47.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.274 "dma_device_type": 2 00:16:47.274 } 00:16:47.274 ], 00:16:47.274 "driver_specific": {} 00:16:47.274 } 00:16:47.274 ] 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.274 "name": "Existed_Raid", 00:16:47.274 "uuid": "f5587b90-34f8-42b0-a1c6-044f6ab14794", 00:16:47.274 "strip_size_kb": 64, 00:16:47.274 "state": "online", 00:16:47.274 "raid_level": "concat", 00:16:47.274 "superblock": true, 00:16:47.274 "num_base_bdevs": 2, 00:16:47.274 "num_base_bdevs_discovered": 2, 00:16:47.274 "num_base_bdevs_operational": 2, 00:16:47.274 "base_bdevs_list": [ 00:16:47.274 { 00:16:47.274 "name": "BaseBdev1", 00:16:47.274 "uuid": 
"ab9fb43b-c16b-4ffe-81ae-7b8441c67bf6", 00:16:47.274 "is_configured": true, 00:16:47.274 "data_offset": 2048, 00:16:47.274 "data_size": 63488 00:16:47.274 }, 00:16:47.274 { 00:16:47.274 "name": "BaseBdev2", 00:16:47.274 "uuid": "875e8693-5cf6-49b5-aa47-525edb0fb6e5", 00:16:47.274 "is_configured": true, 00:16:47.274 "data_offset": 2048, 00:16:47.274 "data_size": 63488 00:16:47.274 } 00:16:47.274 ] 00:16:47.274 }' 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.274 05:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.534 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:47.534 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:47.534 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:47.534 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:47.534 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:47.534 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:47.534 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:47.534 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:47.534 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.534 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.534 [2024-11-20 05:27:19.204090] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:47.534 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:47.534 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:47.534 "name": "Existed_Raid", 00:16:47.534 "aliases": [ 00:16:47.534 "f5587b90-34f8-42b0-a1c6-044f6ab14794" 00:16:47.534 ], 00:16:47.534 "product_name": "Raid Volume", 00:16:47.534 "block_size": 512, 00:16:47.534 "num_blocks": 126976, 00:16:47.534 "uuid": "f5587b90-34f8-42b0-a1c6-044f6ab14794", 00:16:47.534 "assigned_rate_limits": { 00:16:47.534 "rw_ios_per_sec": 0, 00:16:47.534 "rw_mbytes_per_sec": 0, 00:16:47.534 "r_mbytes_per_sec": 0, 00:16:47.534 "w_mbytes_per_sec": 0 00:16:47.534 }, 00:16:47.534 "claimed": false, 00:16:47.534 "zoned": false, 00:16:47.534 "supported_io_types": { 00:16:47.534 "read": true, 00:16:47.534 "write": true, 00:16:47.534 "unmap": true, 00:16:47.534 "flush": true, 00:16:47.534 "reset": true, 00:16:47.534 "nvme_admin": false, 00:16:47.534 "nvme_io": false, 00:16:47.534 "nvme_io_md": false, 00:16:47.534 "write_zeroes": true, 00:16:47.534 "zcopy": false, 00:16:47.534 "get_zone_info": false, 00:16:47.534 "zone_management": false, 00:16:47.534 "zone_append": false, 00:16:47.534 "compare": false, 00:16:47.534 "compare_and_write": false, 00:16:47.534 "abort": false, 00:16:47.534 "seek_hole": false, 00:16:47.534 "seek_data": false, 00:16:47.534 "copy": false, 00:16:47.534 "nvme_iov_md": false 00:16:47.534 }, 00:16:47.534 "memory_domains": [ 00:16:47.534 { 00:16:47.534 "dma_device_id": "system", 00:16:47.534 "dma_device_type": 1 00:16:47.534 }, 00:16:47.534 { 00:16:47.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.534 "dma_device_type": 2 00:16:47.534 }, 00:16:47.534 { 00:16:47.534 "dma_device_id": "system", 00:16:47.534 "dma_device_type": 1 00:16:47.534 }, 00:16:47.534 { 00:16:47.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.534 "dma_device_type": 2 00:16:47.534 } 00:16:47.534 ], 00:16:47.534 "driver_specific": { 00:16:47.534 "raid": { 00:16:47.534 "uuid": "f5587b90-34f8-42b0-a1c6-044f6ab14794", 00:16:47.534 
"strip_size_kb": 64, 00:16:47.534 "state": "online", 00:16:47.534 "raid_level": "concat", 00:16:47.534 "superblock": true, 00:16:47.534 "num_base_bdevs": 2, 00:16:47.534 "num_base_bdevs_discovered": 2, 00:16:47.534 "num_base_bdevs_operational": 2, 00:16:47.534 "base_bdevs_list": [ 00:16:47.534 { 00:16:47.534 "name": "BaseBdev1", 00:16:47.534 "uuid": "ab9fb43b-c16b-4ffe-81ae-7b8441c67bf6", 00:16:47.534 "is_configured": true, 00:16:47.534 "data_offset": 2048, 00:16:47.534 "data_size": 63488 00:16:47.535 }, 00:16:47.535 { 00:16:47.535 "name": "BaseBdev2", 00:16:47.535 "uuid": "875e8693-5cf6-49b5-aa47-525edb0fb6e5", 00:16:47.535 "is_configured": true, 00:16:47.535 "data_offset": 2048, 00:16:47.535 "data_size": 63488 00:16:47.535 } 00:16:47.535 ] 00:16:47.535 } 00:16:47.535 } 00:16:47.535 }' 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:47.535 BaseBdev2' 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.535 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.535 [2024-11-20 05:27:19.343911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:47.535 [2024-11-20 05:27:19.343964] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.535 [2024-11-20 05:27:19.344016] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.795 "name": "Existed_Raid", 00:16:47.795 "uuid": "f5587b90-34f8-42b0-a1c6-044f6ab14794", 00:16:47.795 "strip_size_kb": 64, 00:16:47.795 "state": "offline", 00:16:47.795 "raid_level": "concat", 00:16:47.795 "superblock": true, 00:16:47.795 "num_base_bdevs": 2, 00:16:47.795 "num_base_bdevs_discovered": 1, 00:16:47.795 "num_base_bdevs_operational": 1, 00:16:47.795 "base_bdevs_list": [ 00:16:47.795 { 00:16:47.795 "name": null, 00:16:47.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.795 "is_configured": false, 00:16:47.795 "data_offset": 0, 00:16:47.795 "data_size": 63488 00:16:47.795 }, 00:16:47.795 { 00:16:47.795 "name": "BaseBdev2", 00:16:47.795 "uuid": "875e8693-5cf6-49b5-aa47-525edb0fb6e5", 00:16:47.795 "is_configured": true, 00:16:47.795 "data_offset": 2048, 00:16:47.795 "data_size": 63488 00:16:47.795 } 00:16:47.795 ] 00:16:47.795 }' 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.795 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.053 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:48.053 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:48.053 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:48.053 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:48.053 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.053 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.053 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.053 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:48.053 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:48.053 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:48.053 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.053 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.053 [2024-11-20 05:27:19.753998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:48.053 [2024-11-20 05:27:19.754221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:48.053 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.053 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:48.053 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.054 05:27:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60708 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 60708 ']' 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 60708 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60708 00:16:48.054 killing process with pid 60708 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60708' 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 60708 00:16:48.054 [2024-11-20 05:27:19.860925] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:48.054 05:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 60708 00:16:48.054 [2024-11-20 
05:27:19.869771] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.988 ************************************ 00:16:48.988 END TEST raid_state_function_test_sb 00:16:48.988 ************************************ 00:16:48.988 05:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:48.988 00:16:48.988 real 0m3.568s 00:16:48.988 user 0m5.145s 00:16:48.988 sys 0m0.592s 00:16:48.988 05:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:48.988 05:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.988 05:27:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:16:48.988 05:27:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:48.988 05:27:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:48.988 05:27:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.988 ************************************ 00:16:48.988 START TEST raid_superblock_test 00:16:48.988 ************************************ 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # 
base_bdevs_pt_uuid=() 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60944 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60944 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60944 ']' 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:48.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.988 05:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:48.988 [2024-11-20 05:27:20.603007] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:16:48.988 [2024-11-20 05:27:20.603157] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60944 ] 00:16:48.988 [2024-11-20 05:27:20.760833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.247 [2024-11-20 05:27:20.880577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.247 [2024-11-20 05:27:21.029972] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.247 [2024-11-20 05:27:21.030023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:49.815 
05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.815 malloc1 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.815 [2024-11-20 05:27:21.452861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:49.815 [2024-11-20 05:27:21.453109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.815 [2024-11-20 05:27:21.453142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:49.815 [2024-11-20 05:27:21.453152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.815 [2024-11-20 05:27:21.455508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.815 [2024-11-20 05:27:21.455549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:49.815 pt1 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:49.815 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.816 malloc2 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.816 [2024-11-20 05:27:21.491153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:49.816 [2024-11-20 05:27:21.491234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.816 [2024-11-20 
05:27:21.491261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:49.816 [2024-11-20 05:27:21.491271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.816 [2024-11-20 05:27:21.493642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.816 [2024-11-20 05:27:21.493687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:49.816 pt2 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.816 [2024-11-20 05:27:21.499211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:49.816 [2024-11-20 05:27:21.501450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:49.816 [2024-11-20 05:27:21.501626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:49.816 [2024-11-20 05:27:21.501637] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:49.816 [2024-11-20 05:27:21.501928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:49.816 [2024-11-20 05:27:21.502079] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:49.816 [2024-11-20 05:27:21.502090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:49.816 [2024-11-20 05:27:21.502254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.816 "name": "raid_bdev1", 00:16:49.816 "uuid": "9fb3cb87-c968-43bd-86f6-35fe5dc0d932", 00:16:49.816 "strip_size_kb": 64, 00:16:49.816 "state": "online", 00:16:49.816 "raid_level": "concat", 00:16:49.816 "superblock": true, 00:16:49.816 "num_base_bdevs": 2, 00:16:49.816 "num_base_bdevs_discovered": 2, 00:16:49.816 "num_base_bdevs_operational": 2, 00:16:49.816 "base_bdevs_list": [ 00:16:49.816 { 00:16:49.816 "name": "pt1", 00:16:49.816 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.816 "is_configured": true, 00:16:49.816 "data_offset": 2048, 00:16:49.816 "data_size": 63488 00:16:49.816 }, 00:16:49.816 { 00:16:49.816 "name": "pt2", 00:16:49.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.816 "is_configured": true, 00:16:49.816 "data_offset": 2048, 00:16:49.816 "data_size": 63488 00:16:49.816 } 00:16:49.816 ] 00:16:49.816 }' 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.816 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:50.076 [2024-11-20 05:27:21.807571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:50.076 "name": "raid_bdev1", 00:16:50.076 "aliases": [ 00:16:50.076 "9fb3cb87-c968-43bd-86f6-35fe5dc0d932" 00:16:50.076 ], 00:16:50.076 "product_name": "Raid Volume", 00:16:50.076 "block_size": 512, 00:16:50.076 "num_blocks": 126976, 00:16:50.076 "uuid": "9fb3cb87-c968-43bd-86f6-35fe5dc0d932", 00:16:50.076 "assigned_rate_limits": { 00:16:50.076 "rw_ios_per_sec": 0, 00:16:50.076 "rw_mbytes_per_sec": 0, 00:16:50.076 "r_mbytes_per_sec": 0, 00:16:50.076 "w_mbytes_per_sec": 0 00:16:50.076 }, 00:16:50.076 "claimed": false, 00:16:50.076 "zoned": false, 00:16:50.076 "supported_io_types": { 00:16:50.076 "read": true, 00:16:50.076 "write": true, 00:16:50.076 "unmap": true, 00:16:50.076 "flush": true, 00:16:50.076 "reset": true, 00:16:50.076 "nvme_admin": false, 00:16:50.076 "nvme_io": false, 00:16:50.076 "nvme_io_md": false, 00:16:50.076 "write_zeroes": true, 00:16:50.076 "zcopy": false, 00:16:50.076 "get_zone_info": false, 00:16:50.076 "zone_management": false, 00:16:50.076 "zone_append": false, 00:16:50.076 "compare": false, 00:16:50.076 "compare_and_write": false, 00:16:50.076 "abort": false, 00:16:50.076 "seek_hole": false, 00:16:50.076 "seek_data": false, 00:16:50.076 "copy": false, 00:16:50.076 "nvme_iov_md": false 00:16:50.076 }, 00:16:50.076 "memory_domains": [ 00:16:50.076 { 00:16:50.076 "dma_device_id": "system", 00:16:50.076 "dma_device_type": 1 00:16:50.076 }, 00:16:50.076 { 00:16:50.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.076 
"dma_device_type": 2 00:16:50.076 }, 00:16:50.076 { 00:16:50.076 "dma_device_id": "system", 00:16:50.076 "dma_device_type": 1 00:16:50.076 }, 00:16:50.076 { 00:16:50.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.076 "dma_device_type": 2 00:16:50.076 } 00:16:50.076 ], 00:16:50.076 "driver_specific": { 00:16:50.076 "raid": { 00:16:50.076 "uuid": "9fb3cb87-c968-43bd-86f6-35fe5dc0d932", 00:16:50.076 "strip_size_kb": 64, 00:16:50.076 "state": "online", 00:16:50.076 "raid_level": "concat", 00:16:50.076 "superblock": true, 00:16:50.076 "num_base_bdevs": 2, 00:16:50.076 "num_base_bdevs_discovered": 2, 00:16:50.076 "num_base_bdevs_operational": 2, 00:16:50.076 "base_bdevs_list": [ 00:16:50.076 { 00:16:50.076 "name": "pt1", 00:16:50.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.076 "is_configured": true, 00:16:50.076 "data_offset": 2048, 00:16:50.076 "data_size": 63488 00:16:50.076 }, 00:16:50.076 { 00:16:50.076 "name": "pt2", 00:16:50.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.076 "is_configured": true, 00:16:50.076 "data_offset": 2048, 00:16:50.076 "data_size": 63488 00:16:50.076 } 00:16:50.076 ] 00:16:50.076 } 00:16:50.076 } 00:16:50.076 }' 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:50.076 pt2' 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:50.076 05:27:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.076 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.345 
[2024-11-20 05:27:21.971622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9fb3cb87-c968-43bd-86f6-35fe5dc0d932 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9fb3cb87-c968-43bd-86f6-35fe5dc0d932 ']' 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.345 05:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.345 [2024-11-20 05:27:22.003261] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:50.345 [2024-11-20 05:27:22.003306] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:50.345 [2024-11-20 05:27:22.003409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.345 [2024-11-20 05:27:22.003466] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.345 [2024-11-20 05:27:22.003478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.345 [2024-11-20 05:27:22.099334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:50.345 [2024-11-20 05:27:22.101445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:50.345 [2024-11-20 05:27:22.101689] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:50.345 [2024-11-20 05:27:22.101754] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:50.345 [2024-11-20 05:27:22.101769] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:50.345 [2024-11-20 05:27:22.101780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:50.345 request: 00:16:50.345 { 00:16:50.345 "name": "raid_bdev1", 00:16:50.345 "raid_level": "concat", 00:16:50.345 "base_bdevs": [ 00:16:50.345 "malloc1", 00:16:50.345 "malloc2" 00:16:50.345 ], 00:16:50.345 "strip_size_kb": 64, 00:16:50.345 "superblock": false, 00:16:50.345 "method": "bdev_raid_create", 00:16:50.345 "req_id": 1 00:16:50.345 } 00:16:50.345 Got JSON-RPC error response 00:16:50.345 response: 00:16:50.345 { 00:16:50.345 "code": -17, 00:16:50.345 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:50.345 } 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' 
']' 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.345 [2024-11-20 05:27:22.147349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:50.345 [2024-11-20 05:27:22.147591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.345 [2024-11-20 05:27:22.147633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:50.345 [2024-11-20 05:27:22.147698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.345 [2024-11-20 05:27:22.150081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.345 [2024-11-20 05:27:22.150222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:50.345 [2024-11-20 05:27:22.150377] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:50.345 [2024-11-20 05:27:22.150503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:50.345 pt1 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.345 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.346 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.346 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.346 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.346 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.346 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.346 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.346 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.346 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.346 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.603 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.603 "name": "raid_bdev1", 00:16:50.603 "uuid": "9fb3cb87-c968-43bd-86f6-35fe5dc0d932", 00:16:50.603 "strip_size_kb": 64, 00:16:50.603 "state": "configuring", 00:16:50.603 "raid_level": "concat", 00:16:50.603 "superblock": true, 00:16:50.603 "num_base_bdevs": 2, 00:16:50.603 "num_base_bdevs_discovered": 1, 00:16:50.603 "num_base_bdevs_operational": 2, 00:16:50.603 "base_bdevs_list": [ 00:16:50.603 { 00:16:50.603 "name": "pt1", 00:16:50.603 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.603 "is_configured": true, 00:16:50.603 "data_offset": 2048, 00:16:50.603 "data_size": 63488 00:16:50.603 }, 00:16:50.603 { 00:16:50.603 "name": null, 00:16:50.603 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.603 "is_configured": false, 
00:16:50.603 "data_offset": 2048, 00:16:50.603 "data_size": 63488 00:16:50.603 } 00:16:50.603 ] 00:16:50.603 }' 00:16:50.603 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.603 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.861 [2024-11-20 05:27:22.455426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:50.861 [2024-11-20 05:27:22.455513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.861 [2024-11-20 05:27:22.455537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:50.861 [2024-11-20 05:27:22.455549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.861 [2024-11-20 05:27:22.456052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.861 [2024-11-20 05:27:22.456078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:50.861 [2024-11-20 05:27:22.456162] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:50.861 [2024-11-20 05:27:22.456187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:50.861 [2024-11-20 05:27:22.456298] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:50.861 [2024-11-20 05:27:22.456311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:50.861 [2024-11-20 05:27:22.456570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:50.861 [2024-11-20 05:27:22.456914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:50.861 [2024-11-20 05:27:22.456928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:50.861 [2024-11-20 05:27:22.457075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.861 pt2 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.861 05:27:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.861 "name": "raid_bdev1", 00:16:50.861 "uuid": "9fb3cb87-c968-43bd-86f6-35fe5dc0d932", 00:16:50.861 "strip_size_kb": 64, 00:16:50.861 "state": "online", 00:16:50.861 "raid_level": "concat", 00:16:50.861 "superblock": true, 00:16:50.861 "num_base_bdevs": 2, 00:16:50.861 "num_base_bdevs_discovered": 2, 00:16:50.861 "num_base_bdevs_operational": 2, 00:16:50.861 "base_bdevs_list": [ 00:16:50.861 { 00:16:50.861 "name": "pt1", 00:16:50.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.861 "is_configured": true, 00:16:50.861 "data_offset": 2048, 00:16:50.861 "data_size": 63488 00:16:50.861 }, 00:16:50.861 { 00:16:50.861 "name": "pt2", 00:16:50.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.861 "is_configured": true, 00:16:50.861 "data_offset": 2048, 00:16:50.861 "data_size": 63488 00:16:50.861 } 00:16:50.861 ] 00:16:50.861 }' 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.861 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.119 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # 
verify_raid_bdev_properties raid_bdev1 00:16:51.119 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:51.119 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:51.119 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:51.120 [2024-11-20 05:27:22.791767] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:51.120 "name": "raid_bdev1", 00:16:51.120 "aliases": [ 00:16:51.120 "9fb3cb87-c968-43bd-86f6-35fe5dc0d932" 00:16:51.120 ], 00:16:51.120 "product_name": "Raid Volume", 00:16:51.120 "block_size": 512, 00:16:51.120 "num_blocks": 126976, 00:16:51.120 "uuid": "9fb3cb87-c968-43bd-86f6-35fe5dc0d932", 00:16:51.120 "assigned_rate_limits": { 00:16:51.120 "rw_ios_per_sec": 0, 00:16:51.120 "rw_mbytes_per_sec": 0, 00:16:51.120 "r_mbytes_per_sec": 0, 00:16:51.120 "w_mbytes_per_sec": 0 00:16:51.120 }, 00:16:51.120 "claimed": false, 00:16:51.120 "zoned": false, 00:16:51.120 "supported_io_types": { 00:16:51.120 "read": true, 00:16:51.120 "write": true, 00:16:51.120 "unmap": true, 
00:16:51.120 "flush": true, 00:16:51.120 "reset": true, 00:16:51.120 "nvme_admin": false, 00:16:51.120 "nvme_io": false, 00:16:51.120 "nvme_io_md": false, 00:16:51.120 "write_zeroes": true, 00:16:51.120 "zcopy": false, 00:16:51.120 "get_zone_info": false, 00:16:51.120 "zone_management": false, 00:16:51.120 "zone_append": false, 00:16:51.120 "compare": false, 00:16:51.120 "compare_and_write": false, 00:16:51.120 "abort": false, 00:16:51.120 "seek_hole": false, 00:16:51.120 "seek_data": false, 00:16:51.120 "copy": false, 00:16:51.120 "nvme_iov_md": false 00:16:51.120 }, 00:16:51.120 "memory_domains": [ 00:16:51.120 { 00:16:51.120 "dma_device_id": "system", 00:16:51.120 "dma_device_type": 1 00:16:51.120 }, 00:16:51.120 { 00:16:51.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.120 "dma_device_type": 2 00:16:51.120 }, 00:16:51.120 { 00:16:51.120 "dma_device_id": "system", 00:16:51.120 "dma_device_type": 1 00:16:51.120 }, 00:16:51.120 { 00:16:51.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.120 "dma_device_type": 2 00:16:51.120 } 00:16:51.120 ], 00:16:51.120 "driver_specific": { 00:16:51.120 "raid": { 00:16:51.120 "uuid": "9fb3cb87-c968-43bd-86f6-35fe5dc0d932", 00:16:51.120 "strip_size_kb": 64, 00:16:51.120 "state": "online", 00:16:51.120 "raid_level": "concat", 00:16:51.120 "superblock": true, 00:16:51.120 "num_base_bdevs": 2, 00:16:51.120 "num_base_bdevs_discovered": 2, 00:16:51.120 "num_base_bdevs_operational": 2, 00:16:51.120 "base_bdevs_list": [ 00:16:51.120 { 00:16:51.120 "name": "pt1", 00:16:51.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:51.120 "is_configured": true, 00:16:51.120 "data_offset": 2048, 00:16:51.120 "data_size": 63488 00:16:51.120 }, 00:16:51.120 { 00:16:51.120 "name": "pt2", 00:16:51.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.120 "is_configured": true, 00:16:51.120 "data_offset": 2048, 00:16:51.120 "data_size": 63488 00:16:51.120 } 00:16:51.120 ] 00:16:51.120 } 00:16:51.120 } 00:16:51.120 }' 
00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:51.120 pt2' 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:51.120 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.378 [2024-11-20 05:27:22.967812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9fb3cb87-c968-43bd-86f6-35fe5dc0d932 '!=' 9fb3cb87-c968-43bd-86f6-35fe5dc0d932 ']' 00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60944 00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60944 ']' 00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60944 00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:16:51.378 05:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60944 00:16:51.378 killing process with pid 60944 00:16:51.378 05:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:51.378 05:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:51.378 05:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60944' 00:16:51.378 05:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 60944 00:16:51.378 [2024-11-20 05:27:23.023629] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:51.378 05:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 60944 00:16:51.378 [2024-11-20 05:27:23.023749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.378 [2024-11-20 05:27:23.023815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.378 [2024-11-20 05:27:23.023829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:51.378 [2024-11-20 05:27:23.160905] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.312 05:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:52.312 00:16:52.312 real 0m3.301s 00:16:52.312 user 0m4.571s 00:16:52.312 sys 0m0.577s 00:16:52.312 ************************************ 00:16:52.312 END TEST raid_superblock_test 00:16:52.312 ************************************ 00:16:52.312 05:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:52.312 05:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.312 05:27:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test 
raid_io_error_test concat 2 read 00:16:52.312 05:27:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:52.312 05:27:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:52.312 05:27:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.312 ************************************ 00:16:52.312 START TEST raid_read_error_test 00:16:52.312 ************************************ 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local 
raid_bdev_name=raid_bdev1 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:52.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oaatXWBRSF 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61139 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61139 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61139 ']' 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:52.312 05:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.312 [2024-11-20 05:27:23.959656] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:16:52.312 [2024-11-20 05:27:23.960036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61139 ] 00:16:52.312 [2024-11-20 05:27:24.122319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.571 [2024-11-20 05:27:24.226921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.571 [2024-11-20 05:27:24.350098] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.571 [2024-11-20 05:27:24.350173] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.219 BaseBdev1_malloc 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.219 true 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.219 [2024-11-20 05:27:24.803286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:53.219 [2024-11-20 05:27:24.803553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.219 [2024-11-20 05:27:24.803582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:53.219 [2024-11-20 05:27:24.803592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.219 [2024-11-20 05:27:24.805647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.219 [2024-11-20 05:27:24.805683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:53.219 BaseBdev1 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:16:53.219 BaseBdev2_malloc 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.219 true 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.219 [2024-11-20 05:27:24.845510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:53.219 [2024-11-20 05:27:24.845581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.219 [2024-11-20 05:27:24.845599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:53.219 [2024-11-20 05:27:24.845609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.219 [2024-11-20 05:27:24.847640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.219 [2024-11-20 05:27:24.847680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:53.219 BaseBdev2 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:16:53.219 
05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.219 [2024-11-20 05:27:24.853573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.219 [2024-11-20 05:27:24.855324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.219 [2024-11-20 05:27:24.855708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:53.219 [2024-11-20 05:27:24.855725] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:53.219 [2024-11-20 05:27:24.855979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:53.219 [2024-11-20 05:27:24.856119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:53.219 [2024-11-20 05:27:24.856129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:53.219 [2024-11-20 05:27:24.856278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.219 "name": "raid_bdev1", 00:16:53.219 "uuid": "f0617c21-f3d5-49c1-90da-53875dc9130a", 00:16:53.219 "strip_size_kb": 64, 00:16:53.219 "state": "online", 00:16:53.219 "raid_level": "concat", 00:16:53.219 "superblock": true, 00:16:53.219 "num_base_bdevs": 2, 00:16:53.219 "num_base_bdevs_discovered": 2, 00:16:53.219 "num_base_bdevs_operational": 2, 00:16:53.219 "base_bdevs_list": [ 00:16:53.219 { 00:16:53.219 "name": "BaseBdev1", 00:16:53.219 "uuid": "97a65222-9dbb-5107-b35b-fcb867f3a4b4", 00:16:53.219 "is_configured": true, 00:16:53.219 "data_offset": 2048, 00:16:53.219 "data_size": 63488 00:16:53.219 }, 00:16:53.219 { 00:16:53.219 "name": "BaseBdev2", 00:16:53.219 "uuid": "36d885df-9245-52fa-b82e-34380f4ca276", 00:16:53.219 "is_configured": true, 00:16:53.219 "data_offset": 2048, 00:16:53.219 "data_size": 63488 00:16:53.219 } 00:16:53.219 ] 00:16:53.219 }' 00:16:53.219 05:27:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.219 05:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.477 05:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:53.478 05:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:53.478 [2024-11-20 05:27:25.258501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.412 "name": "raid_bdev1", 00:16:54.412 "uuid": "f0617c21-f3d5-49c1-90da-53875dc9130a", 00:16:54.412 "strip_size_kb": 64, 00:16:54.412 "state": "online", 00:16:54.412 "raid_level": "concat", 00:16:54.412 "superblock": true, 00:16:54.412 "num_base_bdevs": 2, 00:16:54.412 "num_base_bdevs_discovered": 2, 00:16:54.412 "num_base_bdevs_operational": 2, 00:16:54.412 "base_bdevs_list": [ 00:16:54.412 { 00:16:54.412 "name": "BaseBdev1", 00:16:54.412 "uuid": "97a65222-9dbb-5107-b35b-fcb867f3a4b4", 00:16:54.412 "is_configured": true, 00:16:54.412 "data_offset": 2048, 00:16:54.412 "data_size": 63488 00:16:54.412 }, 00:16:54.412 { 00:16:54.412 "name": "BaseBdev2", 00:16:54.412 "uuid": "36d885df-9245-52fa-b82e-34380f4ca276", 00:16:54.412 "is_configured": true, 00:16:54.412 "data_offset": 2048, 00:16:54.412 "data_size": 63488 00:16:54.412 } 00:16:54.412 ] 00:16:54.412 }' 00:16:54.412 05:27:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.412 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.671 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:54.671 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.671 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.671 [2024-11-20 05:27:26.491888] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.671 [2024-11-20 05:27:26.491936] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.671 [2024-11-20 05:27:26.494432] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.671 [2024-11-20 05:27:26.494479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.671 [2024-11-20 05:27:26.494510] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.671 [2024-11-20 05:27:26.494522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:54.671 { 00:16:54.671 "results": [ 00:16:54.671 { 00:16:54.671 "job": "raid_bdev1", 00:16:54.671 "core_mask": "0x1", 00:16:54.671 "workload": "randrw", 00:16:54.671 "percentage": 50, 00:16:54.671 "status": "finished", 00:16:54.671 "queue_depth": 1, 00:16:54.671 "io_size": 131072, 00:16:54.671 "runtime": 1.231487, 00:16:54.671 "iops": 16835.744104485064, 00:16:54.671 "mibps": 2104.468013060633, 00:16:54.671 "io_failed": 1, 00:16:54.671 "io_timeout": 0, 00:16:54.671 "avg_latency_us": 82.186906085137, 00:16:54.671 "min_latency_us": 25.206153846153846, 00:16:54.671 "max_latency_us": 1373.7353846153846 00:16:54.671 } 00:16:54.671 ], 00:16:54.671 "core_count": 1 00:16:54.671 } 00:16:54.671 05:27:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.671 05:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61139 00:16:54.671 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61139 ']' 00:16:54.671 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61139 00:16:54.671 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:16:54.671 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:54.930 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61139 00:16:54.930 killing process with pid 61139 00:16:54.930 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:54.930 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:54.930 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61139' 00:16:54.930 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61139 00:16:54.930 05:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61139 00:16:54.930 [2024-11-20 05:27:26.524791] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.930 [2024-11-20 05:27:26.597233] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:55.494 05:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oaatXWBRSF 00:16:55.494 05:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:55.494 05:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:55.494 ************************************ 00:16:55.494 END TEST raid_read_error_test 00:16:55.494 
************************************ 00:16:55.494 05:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:16:55.494 05:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:55.494 05:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:55.494 05:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:55.494 05:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:16:55.494 00:16:55.494 real 0m3.367s 00:16:55.494 user 0m3.998s 00:16:55.494 sys 0m0.409s 00:16:55.494 05:27:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:55.495 05:27:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.495 05:27:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:16:55.495 05:27:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:55.495 05:27:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:55.495 05:27:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:55.495 ************************************ 00:16:55.495 START TEST raid_write_error_test 00:16:55.495 ************************************ 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZqN4t5oIu7 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # 
raid_pid=61273 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61273 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61273 ']' 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:55.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.495 05:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:55.753 [2024-11-20 05:27:27.360087] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:16:55.753 [2024-11-20 05:27:27.360221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61273 ] 00:16:55.754 [2024-11-20 05:27:27.511502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.012 [2024-11-20 05:27:27.615390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.012 [2024-11-20 05:27:27.737840] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.012 [2024-11-20 05:27:27.737895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.595 BaseBdev1_malloc 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.595 true 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.595 [2024-11-20 05:27:28.249728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:56.595 [2024-11-20 05:27:28.249800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.595 [2024-11-20 05:27:28.249821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:56.595 [2024-11-20 05:27:28.249832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.595 [2024-11-20 05:27:28.251787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.595 [2024-11-20 05:27:28.251850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:56.595 BaseBdev1 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.595 BaseBdev2_malloc 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:56.595 05:27:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.595 true 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.595 [2024-11-20 05:27:28.291808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:56.595 [2024-11-20 05:27:28.291866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.595 [2024-11-20 05:27:28.291883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:56.595 [2024-11-20 05:27:28.291891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.595 [2024-11-20 05:27:28.293852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.595 [2024-11-20 05:27:28.293890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:56.595 BaseBdev2 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.595 [2024-11-20 05:27:28.299891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:16:56.595 [2024-11-20 05:27:28.301668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.595 [2024-11-20 05:27:28.301857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:56.595 [2024-11-20 05:27:28.301869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:56.595 [2024-11-20 05:27:28.302114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:56.595 [2024-11-20 05:27:28.302262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:56.595 [2024-11-20 05:27:28.302271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:56.595 [2024-11-20 05:27:28.302438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.595 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.595 05:27:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.596 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.596 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.596 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.596 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.596 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.596 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.596 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.596 "name": "raid_bdev1", 00:16:56.596 "uuid": "f248771c-c922-4196-8391-d6c2ce7e9c75", 00:16:56.596 "strip_size_kb": 64, 00:16:56.596 "state": "online", 00:16:56.596 "raid_level": "concat", 00:16:56.596 "superblock": true, 00:16:56.596 "num_base_bdevs": 2, 00:16:56.596 "num_base_bdevs_discovered": 2, 00:16:56.596 "num_base_bdevs_operational": 2, 00:16:56.596 "base_bdevs_list": [ 00:16:56.596 { 00:16:56.596 "name": "BaseBdev1", 00:16:56.596 "uuid": "293d077e-478e-59aa-ad35-0728eba2bcfe", 00:16:56.596 "is_configured": true, 00:16:56.596 "data_offset": 2048, 00:16:56.596 "data_size": 63488 00:16:56.596 }, 00:16:56.596 { 00:16:56.596 "name": "BaseBdev2", 00:16:56.596 "uuid": "4b5abab2-31c6-5d57-92d9-8e186d3f1f07", 00:16:56.596 "is_configured": true, 00:16:56.596 "data_offset": 2048, 00:16:56.596 "data_size": 63488 00:16:56.596 } 00:16:56.596 ] 00:16:56.596 }' 00:16:56.596 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.596 05:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.853 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:56.853 05:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:57.111 [2024-11-20 05:27:28.736807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.048 "name": "raid_bdev1", 00:16:58.048 "uuid": "f248771c-c922-4196-8391-d6c2ce7e9c75", 00:16:58.048 "strip_size_kb": 64, 00:16:58.048 "state": "online", 00:16:58.048 "raid_level": "concat", 00:16:58.048 "superblock": true, 00:16:58.048 "num_base_bdevs": 2, 00:16:58.048 "num_base_bdevs_discovered": 2, 00:16:58.048 "num_base_bdevs_operational": 2, 00:16:58.048 "base_bdevs_list": [ 00:16:58.048 { 00:16:58.048 "name": "BaseBdev1", 00:16:58.048 "uuid": "293d077e-478e-59aa-ad35-0728eba2bcfe", 00:16:58.048 "is_configured": true, 00:16:58.048 "data_offset": 2048, 00:16:58.048 "data_size": 63488 00:16:58.048 }, 00:16:58.048 { 00:16:58.048 "name": "BaseBdev2", 00:16:58.048 "uuid": "4b5abab2-31c6-5d57-92d9-8e186d3f1f07", 00:16:58.048 "is_configured": true, 00:16:58.048 "data_offset": 2048, 00:16:58.048 "data_size": 63488 00:16:58.048 } 00:16:58.048 ] 00:16:58.048 }' 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.048 05:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.306 05:27:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:58.306 05:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.306 05:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.306 [2024-11-20 05:27:29.986096] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.306 [2024-11-20 05:27:29.986145] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.306 [2024-11-20 05:27:29.988653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.306 [2024-11-20 05:27:29.988707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.306 [2024-11-20 05:27:29.988738] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.306 [2024-11-20 05:27:29.988748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:58.306 05:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.306 { 00:16:58.306 "results": [ 00:16:58.306 { 00:16:58.306 "job": "raid_bdev1", 00:16:58.306 "core_mask": "0x1", 00:16:58.306 "workload": "randrw", 00:16:58.306 "percentage": 50, 00:16:58.306 "status": "finished", 00:16:58.306 "queue_depth": 1, 00:16:58.306 "io_size": 131072, 00:16:58.306 "runtime": 1.247649, 00:16:58.306 "iops": 16789.978591735337, 00:16:58.306 "mibps": 2098.747323966917, 00:16:58.306 "io_failed": 1, 00:16:58.306 "io_timeout": 0, 00:16:58.306 "avg_latency_us": 82.43268714864304, 00:16:58.306 "min_latency_us": 26.38769230769231, 00:16:58.307 "max_latency_us": 1392.64 00:16:58.307 } 00:16:58.307 ], 00:16:58.307 "core_count": 1 00:16:58.307 } 00:16:58.307 05:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61273 00:16:58.307 05:27:29 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 61273 ']' 00:16:58.307 05:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61273 00:16:58.307 05:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:16:58.307 05:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:58.307 05:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61273 00:16:58.307 05:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:58.307 killing process with pid 61273 00:16:58.307 05:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:58.307 05:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61273' 00:16:58.307 05:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61273 00:16:58.307 05:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61273 00:16:58.307 [2024-11-20 05:27:30.015694] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:58.307 [2024-11-20 05:27:30.087542] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:59.243 05:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:59.243 05:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:59.243 05:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZqN4t5oIu7 00:16:59.243 05:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:16:59.243 05:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:59.243 05:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:59.243 05:27:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:16:59.243 05:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:16:59.243 00:16:59.243 real 0m3.441s 00:16:59.243 user 0m4.120s 00:16:59.243 sys 0m0.446s 00:16:59.243 05:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:59.243 ************************************ 00:16:59.243 END TEST raid_write_error_test 00:16:59.243 ************************************ 00:16:59.243 05:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.243 05:27:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:59.243 05:27:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:16:59.243 05:27:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:59.244 05:27:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:59.244 05:27:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:59.244 ************************************ 00:16:59.244 START TEST raid_state_function_test 00:16:59.244 ************************************ 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61406 00:16:59.244 Process raid pid: 61406 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61406' 
00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61406 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61406 ']' 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.244 05:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:59.244 [2024-11-20 05:27:30.842671] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:16:59.244 [2024-11-20 05:27:30.842854] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.244 [2024-11-20 05:27:30.996554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.551 [2024-11-20 05:27:31.117240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.551 [2024-11-20 05:27:31.267353] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:59.551 [2024-11-20 05:27:31.267414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.118 [2024-11-20 05:27:31.727932] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:00.118 [2024-11-20 05:27:31.728011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:00.118 [2024-11-20 05:27:31.728025] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:00.118 [2024-11-20 05:27:31.728037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.118 05:27:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.118 "name": "Existed_Raid", 00:17:00.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.118 "strip_size_kb": 0, 00:17:00.118 "state": "configuring", 00:17:00.118 
"raid_level": "raid1", 00:17:00.118 "superblock": false, 00:17:00.118 "num_base_bdevs": 2, 00:17:00.118 "num_base_bdevs_discovered": 0, 00:17:00.118 "num_base_bdevs_operational": 2, 00:17:00.118 "base_bdevs_list": [ 00:17:00.118 { 00:17:00.118 "name": "BaseBdev1", 00:17:00.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.118 "is_configured": false, 00:17:00.118 "data_offset": 0, 00:17:00.118 "data_size": 0 00:17:00.118 }, 00:17:00.118 { 00:17:00.118 "name": "BaseBdev2", 00:17:00.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.118 "is_configured": false, 00:17:00.118 "data_offset": 0, 00:17:00.118 "data_size": 0 00:17:00.118 } 00:17:00.118 ] 00:17:00.118 }' 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.118 05:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.376 [2024-11-20 05:27:32.067914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:00.376 [2024-11-20 05:27:32.067956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:00.376 [2024-11-20 05:27:32.075889] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:00.376 [2024-11-20 05:27:32.075930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:00.376 [2024-11-20 05:27:32.075939] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:00.376 [2024-11-20 05:27:32.075951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.376 [2024-11-20 05:27:32.110702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:00.376 BaseBdev1 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.376 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.376 [ 00:17:00.376 { 00:17:00.376 "name": "BaseBdev1", 00:17:00.376 "aliases": [ 00:17:00.376 "fc591b44-de4c-4e70-9164-e3999242ab0d" 00:17:00.376 ], 00:17:00.376 "product_name": "Malloc disk", 00:17:00.376 "block_size": 512, 00:17:00.376 "num_blocks": 65536, 00:17:00.376 "uuid": "fc591b44-de4c-4e70-9164-e3999242ab0d", 00:17:00.376 "assigned_rate_limits": { 00:17:00.376 "rw_ios_per_sec": 0, 00:17:00.376 "rw_mbytes_per_sec": 0, 00:17:00.376 "r_mbytes_per_sec": 0, 00:17:00.376 "w_mbytes_per_sec": 0 00:17:00.376 }, 00:17:00.376 "claimed": true, 00:17:00.376 "claim_type": "exclusive_write", 00:17:00.376 "zoned": false, 00:17:00.376 "supported_io_types": { 00:17:00.376 "read": true, 00:17:00.376 "write": true, 00:17:00.376 "unmap": true, 00:17:00.376 "flush": true, 00:17:00.376 "reset": true, 00:17:00.376 "nvme_admin": false, 00:17:00.376 "nvme_io": false, 00:17:00.376 "nvme_io_md": false, 00:17:00.376 "write_zeroes": true, 00:17:00.376 "zcopy": true, 00:17:00.376 "get_zone_info": false, 00:17:00.376 "zone_management": false, 00:17:00.376 "zone_append": false, 00:17:00.376 "compare": false, 00:17:00.376 "compare_and_write": false, 00:17:00.376 "abort": true, 00:17:00.376 "seek_hole": false, 00:17:00.376 "seek_data": false, 00:17:00.376 "copy": true, 00:17:00.376 "nvme_iov_md": 
false 00:17:00.376 }, 00:17:00.376 "memory_domains": [ 00:17:00.376 { 00:17:00.376 "dma_device_id": "system", 00:17:00.376 "dma_device_type": 1 00:17:00.376 }, 00:17:00.376 { 00:17:00.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.376 "dma_device_type": 2 00:17:00.376 } 00:17:00.376 ], 00:17:00.376 "driver_specific": {} 00:17:00.376 } 00:17:00.376 ] 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.377 
05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.377 "name": "Existed_Raid", 00:17:00.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.377 "strip_size_kb": 0, 00:17:00.377 "state": "configuring", 00:17:00.377 "raid_level": "raid1", 00:17:00.377 "superblock": false, 00:17:00.377 "num_base_bdevs": 2, 00:17:00.377 "num_base_bdevs_discovered": 1, 00:17:00.377 "num_base_bdevs_operational": 2, 00:17:00.377 "base_bdevs_list": [ 00:17:00.377 { 00:17:00.377 "name": "BaseBdev1", 00:17:00.377 "uuid": "fc591b44-de4c-4e70-9164-e3999242ab0d", 00:17:00.377 "is_configured": true, 00:17:00.377 "data_offset": 0, 00:17:00.377 "data_size": 65536 00:17:00.377 }, 00:17:00.377 { 00:17:00.377 "name": "BaseBdev2", 00:17:00.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.377 "is_configured": false, 00:17:00.377 "data_offset": 0, 00:17:00.377 "data_size": 0 00:17:00.377 } 00:17:00.377 ] 00:17:00.377 }' 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.377 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.634 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:00.634 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.634 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.634 [2024-11-20 05:27:32.466840] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:00.893 [2024-11-20 05:27:32.466903] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.893 [2024-11-20 05:27:32.474881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:00.893 [2024-11-20 05:27:32.476982] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:00.893 [2024-11-20 05:27:32.477023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.893 "name": "Existed_Raid", 00:17:00.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.893 "strip_size_kb": 0, 00:17:00.893 "state": "configuring", 00:17:00.893 "raid_level": "raid1", 00:17:00.893 "superblock": false, 00:17:00.893 "num_base_bdevs": 2, 00:17:00.893 "num_base_bdevs_discovered": 1, 00:17:00.893 "num_base_bdevs_operational": 2, 00:17:00.893 "base_bdevs_list": [ 00:17:00.893 { 00:17:00.893 "name": "BaseBdev1", 00:17:00.893 "uuid": "fc591b44-de4c-4e70-9164-e3999242ab0d", 00:17:00.893 "is_configured": true, 00:17:00.893 "data_offset": 0, 00:17:00.893 "data_size": 65536 00:17:00.893 }, 00:17:00.893 { 00:17:00.893 "name": "BaseBdev2", 00:17:00.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.893 "is_configured": false, 00:17:00.893 "data_offset": 0, 00:17:00.893 "data_size": 0 00:17:00.893 } 00:17:00.893 ] 
00:17:00.893 }' 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.893 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.152 [2024-11-20 05:27:32.843913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:01.152 [2024-11-20 05:27:32.843968] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:01.152 [2024-11-20 05:27:32.843977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:01.152 [2024-11-20 05:27:32.844255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:01.152 [2024-11-20 05:27:32.844434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:01.152 [2024-11-20 05:27:32.844447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:01.152 [2024-11-20 05:27:32.844707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.152 BaseBdev2 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@903 -- # local i 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.152 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.152 [ 00:17:01.152 { 00:17:01.153 "name": "BaseBdev2", 00:17:01.153 "aliases": [ 00:17:01.153 "923ba46c-27e6-4e11-9b47-356061b40edc" 00:17:01.153 ], 00:17:01.153 "product_name": "Malloc disk", 00:17:01.153 "block_size": 512, 00:17:01.153 "num_blocks": 65536, 00:17:01.153 "uuid": "923ba46c-27e6-4e11-9b47-356061b40edc", 00:17:01.153 "assigned_rate_limits": { 00:17:01.153 "rw_ios_per_sec": 0, 00:17:01.153 "rw_mbytes_per_sec": 0, 00:17:01.153 "r_mbytes_per_sec": 0, 00:17:01.153 "w_mbytes_per_sec": 0 00:17:01.153 }, 00:17:01.153 "claimed": true, 00:17:01.153 "claim_type": "exclusive_write", 00:17:01.153 "zoned": false, 00:17:01.153 "supported_io_types": { 00:17:01.153 "read": true, 00:17:01.153 "write": true, 00:17:01.153 "unmap": true, 00:17:01.153 "flush": true, 00:17:01.153 "reset": true, 00:17:01.153 "nvme_admin": false, 00:17:01.153 "nvme_io": false, 00:17:01.153 "nvme_io_md": false, 00:17:01.153 "write_zeroes": 
true, 00:17:01.153 "zcopy": true, 00:17:01.153 "get_zone_info": false, 00:17:01.153 "zone_management": false, 00:17:01.153 "zone_append": false, 00:17:01.153 "compare": false, 00:17:01.153 "compare_and_write": false, 00:17:01.153 "abort": true, 00:17:01.153 "seek_hole": false, 00:17:01.153 "seek_data": false, 00:17:01.153 "copy": true, 00:17:01.153 "nvme_iov_md": false 00:17:01.153 }, 00:17:01.153 "memory_domains": [ 00:17:01.153 { 00:17:01.153 "dma_device_id": "system", 00:17:01.153 "dma_device_type": 1 00:17:01.153 }, 00:17:01.153 { 00:17:01.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.153 "dma_device_type": 2 00:17:01.153 } 00:17:01.153 ], 00:17:01.153 "driver_specific": {} 00:17:01.153 } 00:17:01.153 ] 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.153 05:27:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.153 "name": "Existed_Raid", 00:17:01.153 "uuid": "c3bf795a-c432-45bc-b5fb-dac26f698e5f", 00:17:01.153 "strip_size_kb": 0, 00:17:01.153 "state": "online", 00:17:01.153 "raid_level": "raid1", 00:17:01.153 "superblock": false, 00:17:01.153 "num_base_bdevs": 2, 00:17:01.153 "num_base_bdevs_discovered": 2, 00:17:01.153 "num_base_bdevs_operational": 2, 00:17:01.153 "base_bdevs_list": [ 00:17:01.153 { 00:17:01.153 "name": "BaseBdev1", 00:17:01.153 "uuid": "fc591b44-de4c-4e70-9164-e3999242ab0d", 00:17:01.153 "is_configured": true, 00:17:01.153 "data_offset": 0, 00:17:01.153 "data_size": 65536 00:17:01.153 }, 00:17:01.153 { 00:17:01.153 "name": "BaseBdev2", 00:17:01.153 "uuid": "923ba46c-27e6-4e11-9b47-356061b40edc", 00:17:01.153 "is_configured": true, 00:17:01.153 "data_offset": 0, 00:17:01.153 "data_size": 65536 00:17:01.153 } 00:17:01.153 ] 00:17:01.153 }' 00:17:01.153 05:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.153 05:27:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.411 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:01.411 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:01.411 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:01.411 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:01.411 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:01.411 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:01.411 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:01.411 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.411 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.411 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:01.411 [2024-11-20 05:27:33.188358] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:01.411 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.411 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:01.411 "name": "Existed_Raid", 00:17:01.411 "aliases": [ 00:17:01.411 "c3bf795a-c432-45bc-b5fb-dac26f698e5f" 00:17:01.411 ], 00:17:01.411 "product_name": "Raid Volume", 00:17:01.411 "block_size": 512, 00:17:01.411 "num_blocks": 65536, 00:17:01.411 "uuid": "c3bf795a-c432-45bc-b5fb-dac26f698e5f", 00:17:01.411 "assigned_rate_limits": { 00:17:01.411 "rw_ios_per_sec": 0, 00:17:01.411 "rw_mbytes_per_sec": 0, 00:17:01.411 "r_mbytes_per_sec": 0, 00:17:01.411 
"w_mbytes_per_sec": 0 00:17:01.411 }, 00:17:01.411 "claimed": false, 00:17:01.411 "zoned": false, 00:17:01.411 "supported_io_types": { 00:17:01.411 "read": true, 00:17:01.411 "write": true, 00:17:01.411 "unmap": false, 00:17:01.411 "flush": false, 00:17:01.411 "reset": true, 00:17:01.411 "nvme_admin": false, 00:17:01.411 "nvme_io": false, 00:17:01.411 "nvme_io_md": false, 00:17:01.411 "write_zeroes": true, 00:17:01.411 "zcopy": false, 00:17:01.411 "get_zone_info": false, 00:17:01.411 "zone_management": false, 00:17:01.411 "zone_append": false, 00:17:01.411 "compare": false, 00:17:01.411 "compare_and_write": false, 00:17:01.411 "abort": false, 00:17:01.411 "seek_hole": false, 00:17:01.411 "seek_data": false, 00:17:01.411 "copy": false, 00:17:01.411 "nvme_iov_md": false 00:17:01.411 }, 00:17:01.411 "memory_domains": [ 00:17:01.411 { 00:17:01.411 "dma_device_id": "system", 00:17:01.411 "dma_device_type": 1 00:17:01.411 }, 00:17:01.411 { 00:17:01.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.411 "dma_device_type": 2 00:17:01.411 }, 00:17:01.411 { 00:17:01.411 "dma_device_id": "system", 00:17:01.411 "dma_device_type": 1 00:17:01.411 }, 00:17:01.411 { 00:17:01.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.411 "dma_device_type": 2 00:17:01.411 } 00:17:01.411 ], 00:17:01.411 "driver_specific": { 00:17:01.411 "raid": { 00:17:01.411 "uuid": "c3bf795a-c432-45bc-b5fb-dac26f698e5f", 00:17:01.411 "strip_size_kb": 0, 00:17:01.411 "state": "online", 00:17:01.411 "raid_level": "raid1", 00:17:01.411 "superblock": false, 00:17:01.411 "num_base_bdevs": 2, 00:17:01.411 "num_base_bdevs_discovered": 2, 00:17:01.411 "num_base_bdevs_operational": 2, 00:17:01.411 "base_bdevs_list": [ 00:17:01.411 { 00:17:01.411 "name": "BaseBdev1", 00:17:01.411 "uuid": "fc591b44-de4c-4e70-9164-e3999242ab0d", 00:17:01.411 "is_configured": true, 00:17:01.411 "data_offset": 0, 00:17:01.411 "data_size": 65536 00:17:01.411 }, 00:17:01.411 { 00:17:01.411 "name": "BaseBdev2", 00:17:01.411 "uuid": 
"923ba46c-27e6-4e11-9b47-356061b40edc", 00:17:01.411 "is_configured": true, 00:17:01.412 "data_offset": 0, 00:17:01.412 "data_size": 65536 00:17:01.412 } 00:17:01.412 ] 00:17:01.412 } 00:17:01.412 } 00:17:01.412 }' 00:17:01.412 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:01.670 BaseBdev2' 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.670 [2024-11-20 05:27:33.348150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.670 "name": "Existed_Raid", 00:17:01.670 "uuid": "c3bf795a-c432-45bc-b5fb-dac26f698e5f", 00:17:01.670 "strip_size_kb": 0, 00:17:01.670 "state": "online", 00:17:01.670 "raid_level": "raid1", 00:17:01.670 "superblock": false, 00:17:01.670 "num_base_bdevs": 2, 00:17:01.670 "num_base_bdevs_discovered": 1, 00:17:01.670 "num_base_bdevs_operational": 1, 00:17:01.670 "base_bdevs_list": [ 00:17:01.670 { 
00:17:01.670 "name": null, 00:17:01.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.670 "is_configured": false, 00:17:01.670 "data_offset": 0, 00:17:01.670 "data_size": 65536 00:17:01.670 }, 00:17:01.670 { 00:17:01.670 "name": "BaseBdev2", 00:17:01.670 "uuid": "923ba46c-27e6-4e11-9b47-356061b40edc", 00:17:01.670 "is_configured": true, 00:17:01.670 "data_offset": 0, 00:17:01.670 "data_size": 65536 00:17:01.670 } 00:17:01.670 ] 00:17:01.670 }' 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.670 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.928 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:01.928 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:01.928 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.928 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:01.928 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.928 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.928 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.928 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:01.928 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:01.928 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:01.928 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.928 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
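The trace above repeatedly pipes `rpc_cmd bdev_raid_get_bdevs all` through jq filters such as `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name` to pick out configured base bdevs. As a rough Python equivalent (the JSON below is modeled on the `Existed_Raid` dump in this log, with `BaseBdev1` already removed; field names come from the log, but this is an illustrative sketch, not the test's actual tooling):

```python
import json

# RPC output modeled on the bdev_raid_get_bdevs JSON shown in the log
# after BaseBdev1 was deleted: one slot unconfigured, one still active.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1,
  "base_bdevs_list": [
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 65536},
    {"name": "BaseBdev2", "uuid": "923ba46c-27e6-4e11-9b47-356061b40edc",
     "is_configured": true, "data_offset": 0, "data_size": 65536}
  ]
}
""")

# Equivalent of the jq filter
#   '.base_bdevs_list[] | select(.is_configured == true).name'
configured = [b["name"] for b in raid_bdev_info["base_bdevs_list"]
              if b["is_configured"]]
print(configured)  # ['BaseBdev2']
```

This mirrors how `verify_raid_bdev_state` checks `num_base_bdevs_discovered` against the names that survive the filter.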
00:17:01.928 [2024-11-20 05:27:33.745868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:01.928 [2024-11-20 05:27:33.746120] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.187 [2024-11-20 05:27:33.796420] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.187 [2024-11-20 05:27:33.796669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.187 [2024-11-20 05:27:33.796737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61406 00:17:02.187 05:27:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61406 ']' 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 61406 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61406 00:17:02.187 killing process with pid 61406 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61406' 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61406 00:17:02.187 [2024-11-20 05:27:33.860995] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.187 05:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61406 00:17:02.187 [2024-11-20 05:27:33.870033] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:02.754 ************************************ 00:17:02.754 END TEST raid_state_function_test 00:17:02.754 ************************************ 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:02.754 00:17:02.754 real 0m3.715s 00:17:02.754 user 0m5.421s 00:17:02.754 sys 0m0.592s 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.754 05:27:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:17:02.754 05:27:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:02.754 05:27:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:02.754 05:27:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:02.754 ************************************ 00:17:02.754 START TEST raid_state_function_test_sb 00:17:02.754 ************************************ 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:02.754 Process raid pid: 61637 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61637 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61637' 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61637 00:17:02.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61637 ']' 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.754 05:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:03.012 [2024-11-20 05:27:34.604107] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:17:03.012 [2024-11-20 05:27:34.604218] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.012 [2024-11-20 05:27:34.774948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.270 [2024-11-20 05:27:34.894456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.270 [2024-11-20 05:27:35.042416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.270 [2024-11-20 05:27:35.042471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.836 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:03.836 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:03.836 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:03.836 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.836 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.836 [2024-11-20 05:27:35.421853] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:03.836 [2024-11-20 05:27:35.421910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:03.836 [2024-11-20 05:27:35.421921] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:03.836 [2024-11-20 05:27:35.421931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:03.836 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.836 
05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:03.836 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.836 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:03.836 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.836 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.836 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:03.836 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.837 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.837 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.837 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.837 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.837 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.837 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.837 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.837 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.837 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.837 "name": "Existed_Raid", 00:17:03.837 "uuid": "c4c02f58-a1d3-4cc0-9078-67c0d43b9b3d", 00:17:03.837 "strip_size_kb": 0, 
00:17:03.837 "state": "configuring", 00:17:03.837 "raid_level": "raid1", 00:17:03.837 "superblock": true, 00:17:03.837 "num_base_bdevs": 2, 00:17:03.837 "num_base_bdevs_discovered": 0, 00:17:03.837 "num_base_bdevs_operational": 2, 00:17:03.837 "base_bdevs_list": [ 00:17:03.837 { 00:17:03.837 "name": "BaseBdev1", 00:17:03.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.837 "is_configured": false, 00:17:03.837 "data_offset": 0, 00:17:03.837 "data_size": 0 00:17:03.837 }, 00:17:03.837 { 00:17:03.837 "name": "BaseBdev2", 00:17:03.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.837 "is_configured": false, 00:17:03.837 "data_offset": 0, 00:17:03.837 "data_size": 0 00:17:03.837 } 00:17:03.837 ] 00:17:03.837 }' 00:17:03.837 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.837 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.095 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:04.095 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.095 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.095 [2024-11-20 05:27:35.733868] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:04.095 [2024-11-20 05:27:35.733905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:04.095 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.095 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:04.095 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.095 05:27:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.095 [2024-11-20 05:27:35.741860] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:04.095 [2024-11-20 05:27:35.741900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:04.096 [2024-11-20 05:27:35.741909] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:04.096 [2024-11-20 05:27:35.741920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.096 [2024-11-20 05:27:35.776334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:04.096 BaseBdev1 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.096 [ 00:17:04.096 { 00:17:04.096 "name": "BaseBdev1", 00:17:04.096 "aliases": [ 00:17:04.096 "5d114465-1c2b-4a64-bab2-6f58ca1a31c6" 00:17:04.096 ], 00:17:04.096 "product_name": "Malloc disk", 00:17:04.096 "block_size": 512, 00:17:04.096 "num_blocks": 65536, 00:17:04.096 "uuid": "5d114465-1c2b-4a64-bab2-6f58ca1a31c6", 00:17:04.096 "assigned_rate_limits": { 00:17:04.096 "rw_ios_per_sec": 0, 00:17:04.096 "rw_mbytes_per_sec": 0, 00:17:04.096 "r_mbytes_per_sec": 0, 00:17:04.096 "w_mbytes_per_sec": 0 00:17:04.096 }, 00:17:04.096 "claimed": true, 00:17:04.096 "claim_type": "exclusive_write", 00:17:04.096 "zoned": false, 00:17:04.096 "supported_io_types": { 00:17:04.096 "read": true, 00:17:04.096 "write": true, 00:17:04.096 "unmap": true, 00:17:04.096 "flush": true, 00:17:04.096 "reset": true, 00:17:04.096 "nvme_admin": false, 00:17:04.096 "nvme_io": false, 00:17:04.096 "nvme_io_md": false, 00:17:04.096 "write_zeroes": true, 00:17:04.096 "zcopy": true, 00:17:04.096 "get_zone_info": false, 00:17:04.096 "zone_management": false, 00:17:04.096 "zone_append": false, 00:17:04.096 "compare": false, 00:17:04.096 "compare_and_write": false, 00:17:04.096 
"abort": true, 00:17:04.096 "seek_hole": false, 00:17:04.096 "seek_data": false, 00:17:04.096 "copy": true, 00:17:04.096 "nvme_iov_md": false 00:17:04.096 }, 00:17:04.096 "memory_domains": [ 00:17:04.096 { 00:17:04.096 "dma_device_id": "system", 00:17:04.096 "dma_device_type": 1 00:17:04.096 }, 00:17:04.096 { 00:17:04.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.096 "dma_device_type": 2 00:17:04.096 } 00:17:04.096 ], 00:17:04.096 "driver_specific": {} 00:17:04.096 } 00:17:04.096 ] 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.096 "name": "Existed_Raid", 00:17:04.096 "uuid": "e7a56dc2-3525-44f4-ab23-6817d08a0d4e", 00:17:04.096 "strip_size_kb": 0, 00:17:04.096 "state": "configuring", 00:17:04.096 "raid_level": "raid1", 00:17:04.096 "superblock": true, 00:17:04.096 "num_base_bdevs": 2, 00:17:04.096 "num_base_bdevs_discovered": 1, 00:17:04.096 "num_base_bdevs_operational": 2, 00:17:04.096 "base_bdevs_list": [ 00:17:04.096 { 00:17:04.096 "name": "BaseBdev1", 00:17:04.096 "uuid": "5d114465-1c2b-4a64-bab2-6f58ca1a31c6", 00:17:04.096 "is_configured": true, 00:17:04.096 "data_offset": 2048, 00:17:04.096 "data_size": 63488 00:17:04.096 }, 00:17:04.096 { 00:17:04.096 "name": "BaseBdev2", 00:17:04.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.096 "is_configured": false, 00:17:04.096 "data_offset": 0, 00:17:04.096 "data_size": 0 00:17:04.096 } 00:17:04.096 ] 00:17:04.096 }' 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.096 05:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:04.394 [2024-11-20 05:27:36.104472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:04.394 [2024-11-20 05:27:36.104677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.394 [2024-11-20 05:27:36.112536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:04.394 [2024-11-20 05:27:36.114540] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:04.394 [2024-11-20 05:27:36.114584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.394 "name": "Existed_Raid", 00:17:04.394 "uuid": "7a6a9580-6574-44f7-a178-9c084b8de707", 00:17:04.394 "strip_size_kb": 0, 00:17:04.394 "state": "configuring", 00:17:04.394 "raid_level": "raid1", 00:17:04.394 "superblock": true, 00:17:04.394 "num_base_bdevs": 2, 00:17:04.394 "num_base_bdevs_discovered": 1, 00:17:04.394 "num_base_bdevs_operational": 2, 00:17:04.394 "base_bdevs_list": [ 00:17:04.394 { 00:17:04.394 "name": "BaseBdev1", 00:17:04.394 "uuid": "5d114465-1c2b-4a64-bab2-6f58ca1a31c6", 00:17:04.394 "is_configured": true, 00:17:04.394 "data_offset": 2048, 
00:17:04.394 "data_size": 63488 00:17:04.394 }, 00:17:04.394 { 00:17:04.394 "name": "BaseBdev2", 00:17:04.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.394 "is_configured": false, 00:17:04.394 "data_offset": 0, 00:17:04.394 "data_size": 0 00:17:04.394 } 00:17:04.394 ] 00:17:04.394 }' 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.394 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.675 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:04.675 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.675 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.675 BaseBdev2 00:17:04.675 [2024-11-20 05:27:36.453141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:04.675 [2024-11-20 05:27:36.453391] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:04.675 [2024-11-20 05:27:36.453405] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:04.675 [2024-11-20 05:27:36.453681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:04.675 [2024-11-20 05:27:36.453822] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:04.675 [2024-11-20 05:27:36.453833] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:04.675 [2024-11-20 05:27:36.453970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.675 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.675 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:17:04.675 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:04.675 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:04.675 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:04.675 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:04.675 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:04.675 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:04.675 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.675 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.675 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.675 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.676 [ 00:17:04.676 { 00:17:04.676 "name": "BaseBdev2", 00:17:04.676 "aliases": [ 00:17:04.676 "070c9094-f9f5-4b73-8aff-3f4f1b0887a6" 00:17:04.676 ], 00:17:04.676 "product_name": "Malloc disk", 00:17:04.676 "block_size": 512, 00:17:04.676 "num_blocks": 65536, 00:17:04.676 "uuid": "070c9094-f9f5-4b73-8aff-3f4f1b0887a6", 00:17:04.676 "assigned_rate_limits": { 00:17:04.676 "rw_ios_per_sec": 0, 00:17:04.676 "rw_mbytes_per_sec": 0, 00:17:04.676 "r_mbytes_per_sec": 0, 00:17:04.676 "w_mbytes_per_sec": 0 00:17:04.676 }, 00:17:04.676 "claimed": true, 00:17:04.676 "claim_type": 
"exclusive_write", 00:17:04.676 "zoned": false, 00:17:04.676 "supported_io_types": { 00:17:04.676 "read": true, 00:17:04.676 "write": true, 00:17:04.676 "unmap": true, 00:17:04.676 "flush": true, 00:17:04.676 "reset": true, 00:17:04.676 "nvme_admin": false, 00:17:04.676 "nvme_io": false, 00:17:04.676 "nvme_io_md": false, 00:17:04.676 "write_zeroes": true, 00:17:04.676 "zcopy": true, 00:17:04.676 "get_zone_info": false, 00:17:04.676 "zone_management": false, 00:17:04.676 "zone_append": false, 00:17:04.676 "compare": false, 00:17:04.676 "compare_and_write": false, 00:17:04.676 "abort": true, 00:17:04.676 "seek_hole": false, 00:17:04.676 "seek_data": false, 00:17:04.676 "copy": true, 00:17:04.676 "nvme_iov_md": false 00:17:04.676 }, 00:17:04.676 "memory_domains": [ 00:17:04.676 { 00:17:04.676 "dma_device_id": "system", 00:17:04.676 "dma_device_type": 1 00:17:04.676 }, 00:17:04.676 { 00:17:04.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.676 "dma_device_type": 2 00:17:04.676 } 00:17:04.676 ], 00:17:04.676 "driver_specific": {} 00:17:04.676 } 00:17:04.676 ] 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.676 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.934 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.934 "name": "Existed_Raid", 00:17:04.934 "uuid": "7a6a9580-6574-44f7-a178-9c084b8de707", 00:17:04.934 "strip_size_kb": 0, 00:17:04.934 "state": "online", 00:17:04.934 "raid_level": "raid1", 00:17:04.934 "superblock": true, 00:17:04.934 "num_base_bdevs": 2, 00:17:04.934 "num_base_bdevs_discovered": 2, 00:17:04.934 "num_base_bdevs_operational": 2, 00:17:04.934 "base_bdevs_list": [ 00:17:04.934 { 00:17:04.934 "name": "BaseBdev1", 00:17:04.934 "uuid": "5d114465-1c2b-4a64-bab2-6f58ca1a31c6", 00:17:04.934 "is_configured": true, 00:17:04.934 "data_offset": 2048, 00:17:04.934 "data_size": 63488 
00:17:04.934 }, 00:17:04.934 { 00:17:04.934 "name": "BaseBdev2", 00:17:04.934 "uuid": "070c9094-f9f5-4b73-8aff-3f4f1b0887a6", 00:17:04.934 "is_configured": true, 00:17:04.934 "data_offset": 2048, 00:17:04.934 "data_size": 63488 00:17:04.934 } 00:17:04.934 ] 00:17:04.934 }' 00:17:04.934 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.935 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:05.194 [2024-11-20 05:27:36.809598] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:05.194 "name": 
"Existed_Raid", 00:17:05.194 "aliases": [ 00:17:05.194 "7a6a9580-6574-44f7-a178-9c084b8de707" 00:17:05.194 ], 00:17:05.194 "product_name": "Raid Volume", 00:17:05.194 "block_size": 512, 00:17:05.194 "num_blocks": 63488, 00:17:05.194 "uuid": "7a6a9580-6574-44f7-a178-9c084b8de707", 00:17:05.194 "assigned_rate_limits": { 00:17:05.194 "rw_ios_per_sec": 0, 00:17:05.194 "rw_mbytes_per_sec": 0, 00:17:05.194 "r_mbytes_per_sec": 0, 00:17:05.194 "w_mbytes_per_sec": 0 00:17:05.194 }, 00:17:05.194 "claimed": false, 00:17:05.194 "zoned": false, 00:17:05.194 "supported_io_types": { 00:17:05.194 "read": true, 00:17:05.194 "write": true, 00:17:05.194 "unmap": false, 00:17:05.194 "flush": false, 00:17:05.194 "reset": true, 00:17:05.194 "nvme_admin": false, 00:17:05.194 "nvme_io": false, 00:17:05.194 "nvme_io_md": false, 00:17:05.194 "write_zeroes": true, 00:17:05.194 "zcopy": false, 00:17:05.194 "get_zone_info": false, 00:17:05.194 "zone_management": false, 00:17:05.194 "zone_append": false, 00:17:05.194 "compare": false, 00:17:05.194 "compare_and_write": false, 00:17:05.194 "abort": false, 00:17:05.194 "seek_hole": false, 00:17:05.194 "seek_data": false, 00:17:05.194 "copy": false, 00:17:05.194 "nvme_iov_md": false 00:17:05.194 }, 00:17:05.194 "memory_domains": [ 00:17:05.194 { 00:17:05.194 "dma_device_id": "system", 00:17:05.194 "dma_device_type": 1 00:17:05.194 }, 00:17:05.194 { 00:17:05.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.194 "dma_device_type": 2 00:17:05.194 }, 00:17:05.194 { 00:17:05.194 "dma_device_id": "system", 00:17:05.194 "dma_device_type": 1 00:17:05.194 }, 00:17:05.194 { 00:17:05.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.194 "dma_device_type": 2 00:17:05.194 } 00:17:05.194 ], 00:17:05.194 "driver_specific": { 00:17:05.194 "raid": { 00:17:05.194 "uuid": "7a6a9580-6574-44f7-a178-9c084b8de707", 00:17:05.194 "strip_size_kb": 0, 00:17:05.194 "state": "online", 00:17:05.194 "raid_level": "raid1", 00:17:05.194 "superblock": true, 00:17:05.194 
"num_base_bdevs": 2, 00:17:05.194 "num_base_bdevs_discovered": 2, 00:17:05.194 "num_base_bdevs_operational": 2, 00:17:05.194 "base_bdevs_list": [ 00:17:05.194 { 00:17:05.194 "name": "BaseBdev1", 00:17:05.194 "uuid": "5d114465-1c2b-4a64-bab2-6f58ca1a31c6", 00:17:05.194 "is_configured": true, 00:17:05.194 "data_offset": 2048, 00:17:05.194 "data_size": 63488 00:17:05.194 }, 00:17:05.194 { 00:17:05.194 "name": "BaseBdev2", 00:17:05.194 "uuid": "070c9094-f9f5-4b73-8aff-3f4f1b0887a6", 00:17:05.194 "is_configured": true, 00:17:05.194 "data_offset": 2048, 00:17:05.194 "data_size": 63488 00:17:05.194 } 00:17:05.194 ] 00:17:05.194 } 00:17:05.194 } 00:17:05.194 }' 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:05.194 BaseBdev2' 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.194 05:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.194 [2024-11-20 05:27:36.973379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:05.454 05:27:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.454 05:27:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.454 "name": "Existed_Raid", 00:17:05.454 "uuid": "7a6a9580-6574-44f7-a178-9c084b8de707", 00:17:05.454 "strip_size_kb": 0, 00:17:05.454 "state": "online", 00:17:05.454 "raid_level": "raid1", 00:17:05.454 "superblock": true, 00:17:05.454 "num_base_bdevs": 2, 00:17:05.454 "num_base_bdevs_discovered": 1, 00:17:05.454 "num_base_bdevs_operational": 1, 00:17:05.454 "base_bdevs_list": [ 00:17:05.454 { 00:17:05.454 "name": null, 00:17:05.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.454 "is_configured": false, 00:17:05.454 "data_offset": 0, 00:17:05.454 "data_size": 63488 00:17:05.454 }, 00:17:05.454 { 00:17:05.454 "name": "BaseBdev2", 00:17:05.454 "uuid": "070c9094-f9f5-4b73-8aff-3f4f1b0887a6", 00:17:05.454 "is_configured": true, 00:17:05.454 "data_offset": 2048, 00:17:05.454 "data_size": 63488 00:17:05.454 } 00:17:05.454 ] 00:17:05.454 }' 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.454 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.713 05:27:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.713 [2024-11-20 05:27:37.388095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:05.713 [2024-11-20 05:27:37.388212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.713 [2024-11-20 05:27:37.451524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.713 [2024-11-20 05:27:37.451592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.713 [2024-11-20 05:27:37.451604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61637 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61637 ']' 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61637 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61637 00:17:05.713 killing process with pid 61637 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61637' 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61637 00:17:05.713 [2024-11-20 05:27:37.512152] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:05.713 05:27:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@976 -- # wait 61637 00:17:05.713 [2024-11-20 05:27:37.523221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.647 05:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:06.647 00:17:06.647 real 0m3.738s 00:17:06.647 user 0m5.283s 00:17:06.647 sys 0m0.627s 00:17:06.647 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:06.647 05:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.647 ************************************ 00:17:06.647 END TEST raid_state_function_test_sb 00:17:06.647 ************************************ 00:17:06.647 05:27:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:17:06.647 05:27:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:06.647 05:27:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:06.647 05:27:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.647 ************************************ 00:17:06.647 START TEST raid_superblock_test 00:17:06.647 ************************************ 00:17:06.647 05:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:06.648 05:27:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:06.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61878 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61878 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61878 ']' 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.648 05:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:06.648 [2024-11-20 05:27:38.378717] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:17:06.648 [2024-11-20 05:27:38.378818] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61878 ] 00:17:06.999 [2024-11-20 05:27:38.535137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.999 [2024-11-20 05:27:38.653137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.999 [2024-11-20 05:27:38.800809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.999 [2024-11-20 05:27:38.800875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.565 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:07.565 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:17:07.565 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:07.565 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:07.565 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:07.565 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:07.565 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:07.565 
05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:07.565 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:07.565 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.566 malloc1 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.566 [2024-11-20 05:27:39.281273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:07.566 [2024-11-20 05:27:39.281348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.566 [2024-11-20 05:27:39.281387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:07.566 [2024-11-20 05:27:39.281399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.566 [2024-11-20 05:27:39.283674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.566 [2024-11-20 05:27:39.283710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:07.566 pt1 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.566 malloc2 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.566 [2024-11-20 05:27:39.319447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:07.566 [2024-11-20 05:27:39.319509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.566 [2024-11-20 
05:27:39.319534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:07.566 [2024-11-20 05:27:39.319543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.566 [2024-11-20 05:27:39.321794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.566 [2024-11-20 05:27:39.321956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:07.566 pt2 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.566 [2024-11-20 05:27:39.327514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:07.566 [2024-11-20 05:27:39.329513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:07.566 [2024-11-20 05:27:39.329683] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:07.566 [2024-11-20 05:27:39.329698] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:07.566 [2024-11-20 05:27:39.329967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:07.566 [2024-11-20 05:27:39.330115] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:07.566 [2024-11-20 05:27:39.330129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:17:07.566 [2024-11-20 05:27:39.330286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:17:07.566 "name": "raid_bdev1", 00:17:07.566 "uuid": "497a15ed-4521-415c-9d01-9badd0819e3c", 00:17:07.566 "strip_size_kb": 0, 00:17:07.566 "state": "online", 00:17:07.566 "raid_level": "raid1", 00:17:07.566 "superblock": true, 00:17:07.566 "num_base_bdevs": 2, 00:17:07.566 "num_base_bdevs_discovered": 2, 00:17:07.566 "num_base_bdevs_operational": 2, 00:17:07.566 "base_bdevs_list": [ 00:17:07.566 { 00:17:07.566 "name": "pt1", 00:17:07.566 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.566 "is_configured": true, 00:17:07.566 "data_offset": 2048, 00:17:07.566 "data_size": 63488 00:17:07.566 }, 00:17:07.566 { 00:17:07.566 "name": "pt2", 00:17:07.566 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.566 "is_configured": true, 00:17:07.566 "data_offset": 2048, 00:17:07.566 "data_size": 63488 00:17:07.566 } 00:17:07.566 ] 00:17:07.566 }' 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.566 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.824 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:07.824 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:07.824 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:07.824 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:07.824 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:07.824 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:07.824 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.824 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:07.824 05:27:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.824 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.824 [2024-11-20 05:27:39.623868] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.824 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.824 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:07.824 "name": "raid_bdev1", 00:17:07.824 "aliases": [ 00:17:07.824 "497a15ed-4521-415c-9d01-9badd0819e3c" 00:17:07.824 ], 00:17:07.824 "product_name": "Raid Volume", 00:17:07.824 "block_size": 512, 00:17:07.824 "num_blocks": 63488, 00:17:07.824 "uuid": "497a15ed-4521-415c-9d01-9badd0819e3c", 00:17:07.824 "assigned_rate_limits": { 00:17:07.824 "rw_ios_per_sec": 0, 00:17:07.824 "rw_mbytes_per_sec": 0, 00:17:07.824 "r_mbytes_per_sec": 0, 00:17:07.824 "w_mbytes_per_sec": 0 00:17:07.824 }, 00:17:07.824 "claimed": false, 00:17:07.824 "zoned": false, 00:17:07.824 "supported_io_types": { 00:17:07.824 "read": true, 00:17:07.824 "write": true, 00:17:07.824 "unmap": false, 00:17:07.824 "flush": false, 00:17:07.824 "reset": true, 00:17:07.824 "nvme_admin": false, 00:17:07.824 "nvme_io": false, 00:17:07.824 "nvme_io_md": false, 00:17:07.824 "write_zeroes": true, 00:17:07.824 "zcopy": false, 00:17:07.824 "get_zone_info": false, 00:17:07.824 "zone_management": false, 00:17:07.824 "zone_append": false, 00:17:07.824 "compare": false, 00:17:07.824 "compare_and_write": false, 00:17:07.824 "abort": false, 00:17:07.824 "seek_hole": false, 00:17:07.824 "seek_data": false, 00:17:07.824 "copy": false, 00:17:07.824 "nvme_iov_md": false 00:17:07.824 }, 00:17:07.824 "memory_domains": [ 00:17:07.824 { 00:17:07.824 "dma_device_id": "system", 00:17:07.824 "dma_device_type": 1 00:17:07.824 }, 00:17:07.824 { 00:17:07.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.824 
"dma_device_type": 2 00:17:07.824 }, 00:17:07.824 { 00:17:07.824 "dma_device_id": "system", 00:17:07.824 "dma_device_type": 1 00:17:07.824 }, 00:17:07.824 { 00:17:07.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.824 "dma_device_type": 2 00:17:07.824 } 00:17:07.824 ], 00:17:07.824 "driver_specific": { 00:17:07.824 "raid": { 00:17:07.824 "uuid": "497a15ed-4521-415c-9d01-9badd0819e3c", 00:17:07.824 "strip_size_kb": 0, 00:17:07.824 "state": "online", 00:17:07.824 "raid_level": "raid1", 00:17:07.824 "superblock": true, 00:17:07.824 "num_base_bdevs": 2, 00:17:07.824 "num_base_bdevs_discovered": 2, 00:17:07.824 "num_base_bdevs_operational": 2, 00:17:07.824 "base_bdevs_list": [ 00:17:07.824 { 00:17:07.824 "name": "pt1", 00:17:07.824 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.824 "is_configured": true, 00:17:07.824 "data_offset": 2048, 00:17:07.824 "data_size": 63488 00:17:07.824 }, 00:17:07.824 { 00:17:07.824 "name": "pt2", 00:17:07.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.824 "is_configured": true, 00:17:07.824 "data_offset": 2048, 00:17:07.824 "data_size": 63488 00:17:07.824 } 00:17:07.825 ] 00:17:07.825 } 00:17:07.825 } 00:17:07.825 }' 00:17:07.825 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:08.083 pt2' 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:08.083 05:27:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.083 
[2024-11-20 05:27:39.787897] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=497a15ed-4521-415c-9d01-9badd0819e3c 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 497a15ed-4521-415c-9d01-9badd0819e3c ']' 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.083 [2024-11-20 05:27:39.811547] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.083 [2024-11-20 05:27:39.811574] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.083 [2024-11-20 05:27:39.811669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.083 [2024-11-20 05:27:39.811739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.083 [2024-11-20 05:27:39.811751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.083 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.084 [2024-11-20 05:27:39.907612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:08.084 [2024-11-20 05:27:39.909664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:08.084 [2024-11-20 05:27:39.909736] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:08.084 [2024-11-20 05:27:39.909795] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:08.084 [2024-11-20 05:27:39.909810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:17:08.084 [2024-11-20 05:27:39.909821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:08.084 request: 00:17:08.084 { 00:17:08.084 "name": "raid_bdev1", 00:17:08.084 "raid_level": "raid1", 00:17:08.084 "base_bdevs": [ 00:17:08.084 "malloc1", 00:17:08.084 "malloc2" 00:17:08.084 ], 00:17:08.084 "superblock": false, 00:17:08.084 "method": "bdev_raid_create", 00:17:08.084 "req_id": 1 00:17:08.084 } 00:17:08.084 Got JSON-RPC error response 00:17:08.084 response: 00:17:08.084 { 00:17:08.084 "code": -17, 00:17:08.084 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:08.084 } 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:08.084 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # 
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.342 [2024-11-20 05:27:39.947609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:08.342 [2024-11-20 05:27:39.947676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.342 [2024-11-20 05:27:39.947697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:08.342 [2024-11-20 05:27:39.947708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.342 [2024-11-20 05:27:39.950056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.342 [2024-11-20 05:27:39.950094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:08.342 [2024-11-20 05:27:39.950187] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:08.342 [2024-11-20 05:27:39.950255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:08.342 pt1 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.342 05:27:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.342 "name": "raid_bdev1", 00:17:08.342 "uuid": "497a15ed-4521-415c-9d01-9badd0819e3c", 00:17:08.342 "strip_size_kb": 0, 00:17:08.342 "state": "configuring", 00:17:08.342 "raid_level": "raid1", 00:17:08.342 "superblock": true, 00:17:08.342 "num_base_bdevs": 2, 00:17:08.342 "num_base_bdevs_discovered": 1, 00:17:08.342 "num_base_bdevs_operational": 2, 00:17:08.342 "base_bdevs_list": [ 00:17:08.342 { 00:17:08.342 "name": "pt1", 00:17:08.342 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:08.342 "is_configured": true, 00:17:08.342 "data_offset": 2048, 00:17:08.342 "data_size": 63488 00:17:08.342 }, 00:17:08.342 { 00:17:08.342 "name": null, 00:17:08.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.342 "is_configured": false, 00:17:08.342 "data_offset": 2048, 00:17:08.342 "data_size": 63488 00:17:08.342 } 
00:17:08.342 ] 00:17:08.342 }' 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.342 05:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.601 [2024-11-20 05:27:40.279702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:08.601 [2024-11-20 05:27:40.279788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.601 [2024-11-20 05:27:40.279826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:08.601 [2024-11-20 05:27:40.279838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.601 [2024-11-20 05:27:40.280319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.601 [2024-11-20 05:27:40.280342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:08.601 [2024-11-20 05:27:40.280441] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:08.601 [2024-11-20 05:27:40.280467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:08.601 [2024-11-20 05:27:40.280590] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 
00:17:08.601 [2024-11-20 05:27:40.280602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:08.601 [2024-11-20 05:27:40.280848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:08.601 [2024-11-20 05:27:40.280991] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:08.601 [2024-11-20 05:27:40.281000] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:08.601 [2024-11-20 05:27:40.281136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.601 pt2 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.601 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.602 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.602 
05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.602 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.602 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.602 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.602 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.602 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.602 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.602 "name": "raid_bdev1", 00:17:08.602 "uuid": "497a15ed-4521-415c-9d01-9badd0819e3c", 00:17:08.602 "strip_size_kb": 0, 00:17:08.602 "state": "online", 00:17:08.602 "raid_level": "raid1", 00:17:08.602 "superblock": true, 00:17:08.602 "num_base_bdevs": 2, 00:17:08.602 "num_base_bdevs_discovered": 2, 00:17:08.602 "num_base_bdevs_operational": 2, 00:17:08.602 "base_bdevs_list": [ 00:17:08.602 { 00:17:08.602 "name": "pt1", 00:17:08.602 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:08.602 "is_configured": true, 00:17:08.602 "data_offset": 2048, 00:17:08.602 "data_size": 63488 00:17:08.602 }, 00:17:08.602 { 00:17:08.602 "name": "pt2", 00:17:08.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.602 "is_configured": true, 00:17:08.602 "data_offset": 2048, 00:17:08.602 "data_size": 63488 00:17:08.602 } 00:17:08.602 ] 00:17:08.602 }' 00:17:08.602 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.602 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.860 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:08.860 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:17:08.860 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:08.860 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:08.860 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:08.860 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:08.860 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:08.860 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.860 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.860 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.860 [2024-11-20 05:27:40.640005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.860 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.860 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:08.860 "name": "raid_bdev1", 00:17:08.860 "aliases": [ 00:17:08.860 "497a15ed-4521-415c-9d01-9badd0819e3c" 00:17:08.860 ], 00:17:08.860 "product_name": "Raid Volume", 00:17:08.860 "block_size": 512, 00:17:08.860 "num_blocks": 63488, 00:17:08.860 "uuid": "497a15ed-4521-415c-9d01-9badd0819e3c", 00:17:08.860 "assigned_rate_limits": { 00:17:08.860 "rw_ios_per_sec": 0, 00:17:08.860 "rw_mbytes_per_sec": 0, 00:17:08.860 "r_mbytes_per_sec": 0, 00:17:08.860 "w_mbytes_per_sec": 0 00:17:08.860 }, 00:17:08.860 "claimed": false, 00:17:08.861 "zoned": false, 00:17:08.861 "supported_io_types": { 00:17:08.861 "read": true, 00:17:08.861 "write": true, 00:17:08.861 "unmap": false, 00:17:08.861 "flush": false, 00:17:08.861 "reset": true, 00:17:08.861 "nvme_admin": false, 00:17:08.861 "nvme_io": false, 00:17:08.861 
"nvme_io_md": false, 00:17:08.861 "write_zeroes": true, 00:17:08.861 "zcopy": false, 00:17:08.861 "get_zone_info": false, 00:17:08.861 "zone_management": false, 00:17:08.861 "zone_append": false, 00:17:08.861 "compare": false, 00:17:08.861 "compare_and_write": false, 00:17:08.861 "abort": false, 00:17:08.861 "seek_hole": false, 00:17:08.861 "seek_data": false, 00:17:08.861 "copy": false, 00:17:08.861 "nvme_iov_md": false 00:17:08.861 }, 00:17:08.861 "memory_domains": [ 00:17:08.861 { 00:17:08.861 "dma_device_id": "system", 00:17:08.861 "dma_device_type": 1 00:17:08.861 }, 00:17:08.861 { 00:17:08.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.861 "dma_device_type": 2 00:17:08.861 }, 00:17:08.861 { 00:17:08.861 "dma_device_id": "system", 00:17:08.861 "dma_device_type": 1 00:17:08.861 }, 00:17:08.861 { 00:17:08.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.861 "dma_device_type": 2 00:17:08.861 } 00:17:08.861 ], 00:17:08.861 "driver_specific": { 00:17:08.861 "raid": { 00:17:08.861 "uuid": "497a15ed-4521-415c-9d01-9badd0819e3c", 00:17:08.861 "strip_size_kb": 0, 00:17:08.861 "state": "online", 00:17:08.861 "raid_level": "raid1", 00:17:08.861 "superblock": true, 00:17:08.861 "num_base_bdevs": 2, 00:17:08.861 "num_base_bdevs_discovered": 2, 00:17:08.861 "num_base_bdevs_operational": 2, 00:17:08.861 "base_bdevs_list": [ 00:17:08.861 { 00:17:08.861 "name": "pt1", 00:17:08.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:08.861 "is_configured": true, 00:17:08.861 "data_offset": 2048, 00:17:08.861 "data_size": 63488 00:17:08.861 }, 00:17:08.861 { 00:17:08.861 "name": "pt2", 00:17:08.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.861 "is_configured": true, 00:17:08.861 "data_offset": 2048, 00:17:08.861 "data_size": 63488 00:17:08.861 } 00:17:08.861 ] 00:17:08.861 } 00:17:08.861 } 00:17:08.861 }' 00:17:08.861 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:09.120 pt2' 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.120 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:09.120 [2024-11-20 05:27:40.796012] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 497a15ed-4521-415c-9d01-9badd0819e3c '!=' 497a15ed-4521-415c-9d01-9badd0819e3c ']' 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.121 [2024-11-20 05:27:40.823829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.121 "name": "raid_bdev1", 00:17:09.121 "uuid": "497a15ed-4521-415c-9d01-9badd0819e3c", 00:17:09.121 "strip_size_kb": 0, 00:17:09.121 "state": "online", 00:17:09.121 "raid_level": "raid1", 00:17:09.121 "superblock": true, 00:17:09.121 "num_base_bdevs": 2, 00:17:09.121 "num_base_bdevs_discovered": 1, 00:17:09.121 "num_base_bdevs_operational": 1, 00:17:09.121 
"base_bdevs_list": [ 00:17:09.121 { 00:17:09.121 "name": null, 00:17:09.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.121 "is_configured": false, 00:17:09.121 "data_offset": 0, 00:17:09.121 "data_size": 63488 00:17:09.121 }, 00:17:09.121 { 00:17:09.121 "name": "pt2", 00:17:09.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.121 "is_configured": true, 00:17:09.121 "data_offset": 2048, 00:17:09.121 "data_size": 63488 00:17:09.121 } 00:17:09.121 ] 00:17:09.121 }' 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.121 05:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.379 [2024-11-20 05:27:41.143850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.379 [2024-11-20 05:27:41.143882] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.379 [2024-11-20 05:27:41.143958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.379 [2024-11-20 05:27:41.144002] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.379 [2024-11-20 05:27:41.144012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:17:09.379 [2024-11-20 05:27:41.199837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:09.379 [2024-11-20 05:27:41.199902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.379 [2024-11-20 05:27:41.199919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:09.379 [2024-11-20 05:27:41.199928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.379 [2024-11-20 05:27:41.201943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.379 [2024-11-20 05:27:41.201979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:09.379 [2024-11-20 05:27:41.202056] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:09.379 [2024-11-20 05:27:41.202096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.379 [2024-11-20 05:27:41.202180] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:09.379 [2024-11-20 05:27:41.202191] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:09.379 [2024-11-20 05:27:41.202414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:09.379 [2024-11-20 05:27:41.202568] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:09.379 [2024-11-20 05:27:41.202577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:09.379 [2024-11-20 05:27:41.202697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.379 pt2 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.379 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.380 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.380 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.380 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.380 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.380 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.380 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.638 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.639 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.639 "name": "raid_bdev1", 00:17:09.639 "uuid": "497a15ed-4521-415c-9d01-9badd0819e3c", 00:17:09.639 "strip_size_kb": 0, 00:17:09.639 "state": "online", 00:17:09.639 "raid_level": "raid1", 00:17:09.639 "superblock": true, 00:17:09.639 "num_base_bdevs": 2, 00:17:09.639 "num_base_bdevs_discovered": 1, 00:17:09.639 "num_base_bdevs_operational": 1, 00:17:09.639 
"base_bdevs_list": [ 00:17:09.639 { 00:17:09.639 "name": null, 00:17:09.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.639 "is_configured": false, 00:17:09.639 "data_offset": 2048, 00:17:09.639 "data_size": 63488 00:17:09.639 }, 00:17:09.639 { 00:17:09.639 "name": "pt2", 00:17:09.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.639 "is_configured": true, 00:17:09.639 "data_offset": 2048, 00:17:09.639 "data_size": 63488 00:17:09.639 } 00:17:09.639 ] 00:17:09.639 }' 00:17:09.639 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.639 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.898 [2024-11-20 05:27:41.547890] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.898 [2024-11-20 05:27:41.547928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.898 [2024-11-20 05:27:41.547997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.898 [2024-11-20 05:27:41.548046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.898 [2024-11-20 05:27:41.548055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.898 [2024-11-20 05:27:41.591931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:09.898 [2024-11-20 05:27:41.592005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.898 [2024-11-20 05:27:41.592024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:09.898 [2024-11-20 05:27:41.592032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.898 [2024-11-20 05:27:41.594125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.898 [2024-11-20 05:27:41.594320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:09.898 [2024-11-20 05:27:41.594441] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:09.898 [2024-11-20 05:27:41.594486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:09.898 [2024-11-20 05:27:41.594626] 
bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:09.898 [2024-11-20 05:27:41.594635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.898 [2024-11-20 05:27:41.594649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:09.898 [2024-11-20 05:27:41.594693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.898 [2024-11-20 05:27:41.594759] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:09.898 [2024-11-20 05:27:41.594767] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:09.898 [2024-11-20 05:27:41.595011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:09.898 [2024-11-20 05:27:41.595122] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:09.898 [2024-11-20 05:27:41.595131] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:09.898 [2024-11-20 05:27:41.595245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.898 pt1 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.898 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.898 "name": "raid_bdev1", 00:17:09.898 "uuid": "497a15ed-4521-415c-9d01-9badd0819e3c", 00:17:09.898 "strip_size_kb": 0, 00:17:09.898 "state": "online", 00:17:09.899 "raid_level": "raid1", 00:17:09.899 "superblock": true, 00:17:09.899 "num_base_bdevs": 2, 00:17:09.899 "num_base_bdevs_discovered": 1, 00:17:09.899 "num_base_bdevs_operational": 1, 00:17:09.899 "base_bdevs_list": [ 00:17:09.899 { 00:17:09.899 "name": null, 00:17:09.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.899 "is_configured": false, 00:17:09.899 "data_offset": 2048, 00:17:09.899 "data_size": 63488 00:17:09.899 }, 00:17:09.899 { 00:17:09.899 "name": "pt2", 00:17:09.899 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:17:09.899 "is_configured": true, 00:17:09.899 "data_offset": 2048, 00:17:09.899 "data_size": 63488 00:17:09.899 } 00:17:09.899 ] 00:17:09.899 }' 00:17:09.899 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.899 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.158 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:10.158 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:10.158 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.158 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.158 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.158 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:10.158 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:10.158 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.158 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.158 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:10.158 [2024-11-20 05:27:41.964214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.158 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.515 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 497a15ed-4521-415c-9d01-9badd0819e3c '!=' 497a15ed-4521-415c-9d01-9badd0819e3c ']' 00:17:10.515 05:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61878 00:17:10.515 05:27:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61878 ']' 00:17:10.515 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61878 00:17:10.515 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:17:10.515 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:10.515 05:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61878 00:17:10.515 killing process with pid 61878 00:17:10.515 05:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:10.515 05:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:10.515 05:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61878' 00:17:10.515 05:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61878 00:17:10.515 [2024-11-20 05:27:42.018550] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:10.515 05:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61878 00:17:10.515 [2024-11-20 05:27:42.018654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.515 [2024-11-20 05:27:42.018700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.515 [2024-11-20 05:27:42.018717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:10.515 [2024-11-20 05:27:42.128289] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:11.082 ************************************ 00:17:11.082 END TEST raid_superblock_test 00:17:11.082 ************************************ 00:17:11.082 05:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 
00:17:11.082 00:17:11.082 real 0m4.421s 00:17:11.082 user 0m6.767s 00:17:11.082 sys 0m0.727s 00:17:11.082 05:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:11.082 05:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.082 05:27:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:17:11.082 05:27:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:11.082 05:27:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:11.082 05:27:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:11.082 ************************************ 00:17:11.082 START TEST raid_read_error_test 00:17:11.082 ************************************ 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:11.082 05:27:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:11.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yfFIAJ9M3n 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62192 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62192 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62192 ']' 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:11.082 05:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.082 [2024-11-20 05:27:42.873523] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:17:11.082 [2024-11-20 05:27:42.873649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62192 ] 00:17:11.341 [2024-11-20 05:27:43.030505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.341 [2024-11-20 05:27:43.133909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.600 [2024-11-20 05:27:43.255571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.600 [2024-11-20 05:27:43.255623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.165 BaseBdev1_malloc 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.165 true 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.165 [2024-11-20 05:27:43.735918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:12.165 [2024-11-20 05:27:43.736116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.165 [2024-11-20 05:27:43.736144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:12.165 [2024-11-20 05:27:43.736154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.165 [2024-11-20 05:27:43.738153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.165 [2024-11-20 05:27:43.738197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:12.165 BaseBdev1 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.165 BaseBdev2_malloc 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.165 true 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.165 [2024-11-20 05:27:43.778382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:12.165 [2024-11-20 05:27:43.778584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.165 [2024-11-20 05:27:43.778608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:12.165 [2024-11-20 05:27:43.778618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.165 [2024-11-20 05:27:43.780628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.165 [2024-11-20 05:27:43.780664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:12.165 BaseBdev2 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.165 [2024-11-20 05:27:43.786438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:12.165 
[2024-11-20 05:27:43.788140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:12.165 [2024-11-20 05:27:43.788329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:12.165 [2024-11-20 05:27:43.788340] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:12.165 [2024-11-20 05:27:43.788589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:12.165 [2024-11-20 05:27:43.788735] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:12.165 [2024-11-20 05:27:43.788743] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:12.165 [2024-11-20 05:27:43.788886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:12.165 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.166 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.166 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.166 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.166 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:12.166 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.166 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.166 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:12.166 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.166 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.166 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.166 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.166 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.166 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.166 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.166 "name": "raid_bdev1", 00:17:12.166 "uuid": "78507116-8bc8-48da-814a-aee9aae886fc", 00:17:12.166 "strip_size_kb": 0, 00:17:12.166 "state": "online", 00:17:12.166 "raid_level": "raid1", 00:17:12.166 "superblock": true, 00:17:12.166 "num_base_bdevs": 2, 00:17:12.166 "num_base_bdevs_discovered": 2, 00:17:12.166 "num_base_bdevs_operational": 2, 00:17:12.166 "base_bdevs_list": [ 00:17:12.166 { 00:17:12.166 "name": "BaseBdev1", 00:17:12.166 "uuid": "b42f1f13-ee80-513a-baa0-af1671b3ab05", 00:17:12.166 "is_configured": true, 00:17:12.166 "data_offset": 2048, 00:17:12.166 "data_size": 63488 00:17:12.166 }, 00:17:12.166 { 00:17:12.166 "name": "BaseBdev2", 00:17:12.166 "uuid": "b8842eab-d509-5de8-bb59-31d71f2a80ea", 00:17:12.166 "is_configured": true, 00:17:12.166 "data_offset": 2048, 00:17:12.166 "data_size": 63488 00:17:12.166 } 00:17:12.166 ] 00:17:12.166 }' 00:17:12.166 05:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.166 05:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.424 05:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 
00:17:12.424 05:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:12.424 [2024-11-20 05:27:44.187396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.378 "name": "raid_bdev1", 00:17:13.378 "uuid": "78507116-8bc8-48da-814a-aee9aae886fc", 00:17:13.378 "strip_size_kb": 0, 00:17:13.378 "state": "online", 00:17:13.378 "raid_level": "raid1", 00:17:13.378 "superblock": true, 00:17:13.378 "num_base_bdevs": 2, 00:17:13.378 "num_base_bdevs_discovered": 2, 00:17:13.378 "num_base_bdevs_operational": 2, 00:17:13.378 "base_bdevs_list": [ 00:17:13.378 { 00:17:13.378 "name": "BaseBdev1", 00:17:13.378 "uuid": "b42f1f13-ee80-513a-baa0-af1671b3ab05", 00:17:13.378 "is_configured": true, 00:17:13.378 "data_offset": 2048, 00:17:13.378 "data_size": 63488 00:17:13.378 }, 00:17:13.378 { 00:17:13.378 "name": "BaseBdev2", 00:17:13.378 "uuid": "b8842eab-d509-5de8-bb59-31d71f2a80ea", 00:17:13.378 "is_configured": true, 00:17:13.378 "data_offset": 2048, 00:17:13.378 "data_size": 63488 00:17:13.378 } 00:17:13.378 ] 00:17:13.378 }' 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.378 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.637 05:27:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:13.637 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.637 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.637 [2024-11-20 05:27:45.440373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:13.637 [2024-11-20 05:27:45.440409] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.637 [2024-11-20 05:27:45.442850] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.637 [2024-11-20 05:27:45.442893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.637 [2024-11-20 05:27:45.442973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.637 [2024-11-20 05:27:45.442984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:13.637 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.637 { 00:17:13.637 "results": [ 00:17:13.637 { 00:17:13.637 "job": "raid_bdev1", 00:17:13.637 "core_mask": "0x1", 00:17:13.637 "workload": "randrw", 00:17:13.637 "percentage": 50, 00:17:13.637 "status": "finished", 00:17:13.637 "queue_depth": 1, 00:17:13.637 "io_size": 131072, 00:17:13.637 "runtime": 1.251212, 00:17:13.637 "iops": 19318.86842517495, 00:17:13.637 "mibps": 2414.858553146869, 00:17:13.637 "io_failed": 0, 00:17:13.637 "io_timeout": 0, 00:17:13.637 "avg_latency_us": 49.26192237681233, 00:17:13.637 "min_latency_us": 22.744615384615386, 00:17:13.637 "max_latency_us": 1405.2430769230768 00:17:13.637 } 00:17:13.637 ], 00:17:13.637 "core_count": 1 00:17:13.637 } 00:17:13.637 05:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62192 00:17:13.637 05:27:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 62192 ']' 00:17:13.637 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62192 00:17:13.637 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:17:13.637 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:13.637 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62192 00:17:13.895 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:13.895 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:13.895 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62192' 00:17:13.895 killing process with pid 62192 00:17:13.895 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62192 00:17:13.895 [2024-11-20 05:27:45.475666] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:13.895 05:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62192 00:17:13.895 [2024-11-20 05:27:45.546769] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:14.460 05:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yfFIAJ9M3n 00:17:14.460 05:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:14.460 05:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:14.460 05:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:17:14.460 05:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:14.460 ************************************ 00:17:14.460 END TEST raid_read_error_test 00:17:14.460 ************************************ 00:17:14.460 05:27:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:14.460 05:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:14.460 05:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:14.460 00:17:14.460 real 0m3.392s 00:17:14.460 user 0m4.018s 00:17:14.460 sys 0m0.437s 00:17:14.460 05:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:14.460 05:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.460 05:27:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:17:14.460 05:27:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:14.460 05:27:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:14.460 05:27:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:14.460 ************************************ 00:17:14.460 START TEST raid_write_error_test 00:17:14.460 ************************************ 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:14.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SNaQdGP4c2 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62326 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62326 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62326 ']' 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.460 05:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:14.717 [2024-11-20 05:27:46.312210] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:17:14.717 [2024-11-20 05:27:46.312575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62326 ] 00:17:14.717 [2024-11-20 05:27:46.472640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.974 [2024-11-20 05:27:46.596818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.974 [2024-11-20 05:27:46.749023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.974 [2024-11-20 05:27:46.749084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:15.540 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:15.540 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:17:15.540 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:15.540 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.541 BaseBdev1_malloc 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.541 true 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.541 [2024-11-20 05:27:47.185198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:15.541 [2024-11-20 05:27:47.185261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.541 [2024-11-20 05:27:47.185283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:15.541 [2024-11-20 05:27:47.185295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.541 [2024-11-20 05:27:47.187642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.541 [2024-11-20 05:27:47.187685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:15.541 BaseBdev1 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.541 BaseBdev2_malloc 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:15.541 05:27:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.541 true 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.541 [2024-11-20 05:27:47.231713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:15.541 [2024-11-20 05:27:47.231777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.541 [2024-11-20 05:27:47.231814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:15.541 [2024-11-20 05:27:47.231826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.541 [2024-11-20 05:27:47.234173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.541 [2024-11-20 05:27:47.234216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:15.541 BaseBdev2 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.541 [2024-11-20 05:27:47.239775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:17:15.541 [2024-11-20 05:27:47.241833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:15.541 [2024-11-20 05:27:47.242057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:15.541 [2024-11-20 05:27:47.242071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:15.541 [2024-11-20 05:27:47.242383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:15.541 [2024-11-20 05:27:47.242559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:15.541 [2024-11-20 05:27:47.242568] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:15.541 [2024-11-20 05:27:47.242742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.541 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.541 "name": "raid_bdev1", 00:17:15.541 "uuid": "f6b8e67c-ddb3-402a-be56-8cc621d40655", 00:17:15.541 "strip_size_kb": 0, 00:17:15.541 "state": "online", 00:17:15.541 "raid_level": "raid1", 00:17:15.541 "superblock": true, 00:17:15.541 "num_base_bdevs": 2, 00:17:15.541 "num_base_bdevs_discovered": 2, 00:17:15.541 "num_base_bdevs_operational": 2, 00:17:15.541 "base_bdevs_list": [ 00:17:15.541 { 00:17:15.541 "name": "BaseBdev1", 00:17:15.541 "uuid": "be8501a9-88c1-57fd-9dda-128dbb1355f1", 00:17:15.541 "is_configured": true, 00:17:15.541 "data_offset": 2048, 00:17:15.542 "data_size": 63488 00:17:15.542 }, 00:17:15.542 { 00:17:15.542 "name": "BaseBdev2", 00:17:15.542 "uuid": "a5b7608a-db25-508e-b180-6f48c18eef3a", 00:17:15.542 "is_configured": true, 00:17:15.542 "data_offset": 2048, 00:17:15.542 "data_size": 63488 00:17:15.542 } 00:17:15.542 ] 00:17:15.542 }' 00:17:15.542 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.542 05:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.800 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:15.800 05:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:16.058 [2024-11-20 05:27:47.684936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.992 [2024-11-20 05:27:48.591479] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:16.992 [2024-11-20 05:27:48.591548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:16.992 [2024-11-20 05:27:48.591751] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.992 05:27:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.992 "name": "raid_bdev1", 00:17:16.992 "uuid": "f6b8e67c-ddb3-402a-be56-8cc621d40655", 00:17:16.992 "strip_size_kb": 0, 00:17:16.992 "state": "online", 00:17:16.992 "raid_level": "raid1", 00:17:16.992 "superblock": true, 00:17:16.992 "num_base_bdevs": 2, 00:17:16.992 "num_base_bdevs_discovered": 1, 00:17:16.992 "num_base_bdevs_operational": 1, 00:17:16.992 "base_bdevs_list": [ 00:17:16.992 { 00:17:16.992 "name": null, 00:17:16.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.992 "is_configured": false, 00:17:16.992 "data_offset": 0, 00:17:16.992 "data_size": 63488 00:17:16.992 }, 
00:17:16.992 { 00:17:16.992 "name": "BaseBdev2", 00:17:16.992 "uuid": "a5b7608a-db25-508e-b180-6f48c18eef3a", 00:17:16.992 "is_configured": true, 00:17:16.992 "data_offset": 2048, 00:17:16.992 "data_size": 63488 00:17:16.992 } 00:17:16.992 ] 00:17:16.992 }' 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.992 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.253 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:17.253 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.253 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.253 [2024-11-20 05:27:48.926312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.253 [2024-11-20 05:27:48.926373] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.253 [2024-11-20 05:27:48.929438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.253 [2024-11-20 05:27:48.929497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.253 [2024-11-20 05:27:48.929561] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.253 [2024-11-20 05:27:48.929572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:17.253 { 00:17:17.253 "results": [ 00:17:17.253 { 00:17:17.253 "job": "raid_bdev1", 00:17:17.253 "core_mask": "0x1", 00:17:17.253 "workload": "randrw", 00:17:17.253 "percentage": 50, 00:17:17.253 "status": "finished", 00:17:17.253 "queue_depth": 1, 00:17:17.253 "io_size": 131072, 00:17:17.253 "runtime": 1.239371, 00:17:17.253 "iops": 17450.787536581058, 00:17:17.253 "mibps": 2181.348442072632, 00:17:17.253 "io_failed": 0, 
00:17:17.253 "io_timeout": 0, 00:17:17.253 "avg_latency_us": 54.17072683558351, 00:17:17.253 "min_latency_us": 28.553846153846155, 00:17:17.253 "max_latency_us": 1688.8123076923077 00:17:17.253 } 00:17:17.253 ], 00:17:17.253 "core_count": 1 00:17:17.253 } 00:17:17.253 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.253 05:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62326 00:17:17.253 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 62326 ']' 00:17:17.253 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62326 00:17:17.253 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:17:17.253 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:17.253 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62326 00:17:17.253 killing process with pid 62326 00:17:17.253 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:17.254 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:17.254 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62326' 00:17:17.254 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62326 00:17:17.254 05:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 62326 00:17:17.254 [2024-11-20 05:27:48.959661] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:17.254 [2024-11-20 05:27:49.051383] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:18.189 05:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SNaQdGP4c2 00:17:18.189 05:27:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:18.189 05:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:18.189 05:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:17:18.189 05:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:18.189 05:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:18.189 05:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:18.189 05:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:18.189 ************************************ 00:17:18.189 END TEST raid_write_error_test 00:17:18.189 ************************************ 00:17:18.189 00:17:18.189 real 0m3.628s 00:17:18.189 user 0m4.287s 00:17:18.189 sys 0m0.460s 00:17:18.189 05:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:18.189 05:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.189 05:27:49 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:17:18.189 05:27:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:17:18.189 05:27:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:17:18.189 05:27:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:18.189 05:27:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:18.189 05:27:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.189 ************************************ 00:17:18.189 START TEST raid_state_function_test 00:17:18.189 ************************************ 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:17:18.189 05:27:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:18.189 Process raid pid: 62459 00:17:18.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62459 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62459' 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62459 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62459 ']' 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.189 05:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:18.189 [2024-11-20 05:27:49.982123] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:17:18.189 [2024-11-20 05:27:49.982534] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.448 [2024-11-20 05:27:50.144136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.448 [2024-11-20 05:27:50.267432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.707 [2024-11-20 05:27:50.416885] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.707 [2024-11-20 05:27:50.416940] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.273 05:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:19.273 05:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:17:19.273 05:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:19.273 05:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.273 05:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.273 [2024-11-20 05:27:50.905923] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:19.273 [2024-11-20 
05:27:50.905982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:19.273 [2024-11-20 05:27:50.905993] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.273 [2024-11-20 05:27:50.906003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.273 [2024-11-20 05:27:50.906009] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:19.273 [2024-11-20 05:27:50.906018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:19.273 05:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.273 05:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:19.273 05:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:19.273 05:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.273 05:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:19.273 05:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.273 05:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:19.273 05:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.273 05:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.273 05:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.274 05:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.274 05:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:17:19.274 05:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.274 05:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.274 05:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.274 05:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.274 05:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.274 "name": "Existed_Raid", 00:17:19.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.274 "strip_size_kb": 64, 00:17:19.274 "state": "configuring", 00:17:19.274 "raid_level": "raid0", 00:17:19.274 "superblock": false, 00:17:19.274 "num_base_bdevs": 3, 00:17:19.274 "num_base_bdevs_discovered": 0, 00:17:19.274 "num_base_bdevs_operational": 3, 00:17:19.274 "base_bdevs_list": [ 00:17:19.274 { 00:17:19.274 "name": "BaseBdev1", 00:17:19.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.274 "is_configured": false, 00:17:19.274 "data_offset": 0, 00:17:19.274 "data_size": 0 00:17:19.274 }, 00:17:19.274 { 00:17:19.274 "name": "BaseBdev2", 00:17:19.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.274 "is_configured": false, 00:17:19.274 "data_offset": 0, 00:17:19.274 "data_size": 0 00:17:19.274 }, 00:17:19.274 { 00:17:19.274 "name": "BaseBdev3", 00:17:19.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.274 "is_configured": false, 00:17:19.274 "data_offset": 0, 00:17:19.274 "data_size": 0 00:17:19.274 } 00:17:19.274 ] 00:17:19.274 }' 00:17:19.274 05:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.274 05:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.533 [2024-11-20 05:27:51.237979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:19.533 [2024-11-20 05:27:51.238034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.533 [2024-11-20 05:27:51.245992] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:19.533 [2024-11-20 05:27:51.246058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:19.533 [2024-11-20 05:27:51.246068] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.533 [2024-11-20 05:27:51.246078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.533 [2024-11-20 05:27:51.246084] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:19.533 [2024-11-20 05:27:51.246094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev1 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.533 [2024-11-20 05:27:51.281816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.533 BaseBdev1 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.533 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.533 [ 00:17:19.533 { 00:17:19.533 "name": 
"BaseBdev1", 00:17:19.533 "aliases": [ 00:17:19.533 "a622e919-a25e-4834-9a2e-669ffc07c3e2" 00:17:19.533 ], 00:17:19.533 "product_name": "Malloc disk", 00:17:19.533 "block_size": 512, 00:17:19.533 "num_blocks": 65536, 00:17:19.534 "uuid": "a622e919-a25e-4834-9a2e-669ffc07c3e2", 00:17:19.534 "assigned_rate_limits": { 00:17:19.534 "rw_ios_per_sec": 0, 00:17:19.534 "rw_mbytes_per_sec": 0, 00:17:19.534 "r_mbytes_per_sec": 0, 00:17:19.534 "w_mbytes_per_sec": 0 00:17:19.534 }, 00:17:19.534 "claimed": true, 00:17:19.534 "claim_type": "exclusive_write", 00:17:19.534 "zoned": false, 00:17:19.534 "supported_io_types": { 00:17:19.534 "read": true, 00:17:19.534 "write": true, 00:17:19.534 "unmap": true, 00:17:19.534 "flush": true, 00:17:19.534 "reset": true, 00:17:19.534 "nvme_admin": false, 00:17:19.534 "nvme_io": false, 00:17:19.534 "nvme_io_md": false, 00:17:19.534 "write_zeroes": true, 00:17:19.534 "zcopy": true, 00:17:19.534 "get_zone_info": false, 00:17:19.534 "zone_management": false, 00:17:19.534 "zone_append": false, 00:17:19.534 "compare": false, 00:17:19.534 "compare_and_write": false, 00:17:19.534 "abort": true, 00:17:19.534 "seek_hole": false, 00:17:19.534 "seek_data": false, 00:17:19.534 "copy": true, 00:17:19.534 "nvme_iov_md": false 00:17:19.534 }, 00:17:19.534 "memory_domains": [ 00:17:19.534 { 00:17:19.534 "dma_device_id": "system", 00:17:19.534 "dma_device_type": 1 00:17:19.534 }, 00:17:19.534 { 00:17:19.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.534 "dma_device_type": 2 00:17:19.534 } 00:17:19.534 ], 00:17:19.534 "driver_specific": {} 00:17:19.534 } 00:17:19.534 ] 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:19.534 05:27:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.534 "name": "Existed_Raid", 00:17:19.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.534 "strip_size_kb": 64, 00:17:19.534 "state": "configuring", 00:17:19.534 "raid_level": "raid0", 00:17:19.534 "superblock": false, 00:17:19.534 "num_base_bdevs": 3, 00:17:19.534 "num_base_bdevs_discovered": 1, 00:17:19.534 
"num_base_bdevs_operational": 3, 00:17:19.534 "base_bdevs_list": [ 00:17:19.534 { 00:17:19.534 "name": "BaseBdev1", 00:17:19.534 "uuid": "a622e919-a25e-4834-9a2e-669ffc07c3e2", 00:17:19.534 "is_configured": true, 00:17:19.534 "data_offset": 0, 00:17:19.534 "data_size": 65536 00:17:19.534 }, 00:17:19.534 { 00:17:19.534 "name": "BaseBdev2", 00:17:19.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.534 "is_configured": false, 00:17:19.534 "data_offset": 0, 00:17:19.534 "data_size": 0 00:17:19.534 }, 00:17:19.534 { 00:17:19.534 "name": "BaseBdev3", 00:17:19.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.534 "is_configured": false, 00:17:19.534 "data_offset": 0, 00:17:19.534 "data_size": 0 00:17:19.534 } 00:17:19.534 ] 00:17:19.534 }' 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.534 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.102 [2024-11-20 05:27:51.653967] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:20.102 [2024-11-20 05:27:51.654164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.102 
05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.102 [2024-11-20 05:27:51.662039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:20.102 [2024-11-20 05:27:51.664086] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:20.102 [2024-11-20 05:27:51.664136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:20.102 [2024-11-20 05:27:51.664147] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:20.102 [2024-11-20 05:27:51.664157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.102 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.103 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.103 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.103 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.103 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.103 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.103 "name": "Existed_Raid", 00:17:20.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.103 "strip_size_kb": 64, 00:17:20.103 "state": "configuring", 00:17:20.103 "raid_level": "raid0", 00:17:20.103 "superblock": false, 00:17:20.103 "num_base_bdevs": 3, 00:17:20.103 "num_base_bdevs_discovered": 1, 00:17:20.103 "num_base_bdevs_operational": 3, 00:17:20.103 "base_bdevs_list": [ 00:17:20.103 { 00:17:20.103 "name": "BaseBdev1", 00:17:20.103 "uuid": "a622e919-a25e-4834-9a2e-669ffc07c3e2", 00:17:20.103 "is_configured": true, 00:17:20.103 "data_offset": 0, 00:17:20.103 "data_size": 65536 00:17:20.103 }, 00:17:20.103 { 00:17:20.103 "name": "BaseBdev2", 00:17:20.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.103 "is_configured": false, 00:17:20.103 "data_offset": 0, 00:17:20.103 "data_size": 0 00:17:20.103 }, 00:17:20.103 { 00:17:20.103 "name": "BaseBdev3", 00:17:20.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.103 "is_configured": false, 00:17:20.103 "data_offset": 0, 00:17:20.103 "data_size": 0 00:17:20.103 } 00:17:20.103 ] 00:17:20.103 }' 
00:17:20.103 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.103 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.362 05:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:20.362 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.362 05:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.362 [2024-11-20 05:27:52.006844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:20.362 BaseBdev2 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.362 [ 00:17:20.362 { 00:17:20.362 "name": "BaseBdev2", 00:17:20.362 "aliases": [ 00:17:20.362 "52eb672a-99ba-48df-8952-f79aa1026c65" 00:17:20.362 ], 00:17:20.362 "product_name": "Malloc disk", 00:17:20.362 "block_size": 512, 00:17:20.362 "num_blocks": 65536, 00:17:20.362 "uuid": "52eb672a-99ba-48df-8952-f79aa1026c65", 00:17:20.362 "assigned_rate_limits": { 00:17:20.362 "rw_ios_per_sec": 0, 00:17:20.362 "rw_mbytes_per_sec": 0, 00:17:20.362 "r_mbytes_per_sec": 0, 00:17:20.362 "w_mbytes_per_sec": 0 00:17:20.362 }, 00:17:20.362 "claimed": true, 00:17:20.362 "claim_type": "exclusive_write", 00:17:20.362 "zoned": false, 00:17:20.362 "supported_io_types": { 00:17:20.362 "read": true, 00:17:20.362 "write": true, 00:17:20.362 "unmap": true, 00:17:20.362 "flush": true, 00:17:20.362 "reset": true, 00:17:20.362 "nvme_admin": false, 00:17:20.362 "nvme_io": false, 00:17:20.362 "nvme_io_md": false, 00:17:20.362 "write_zeroes": true, 00:17:20.362 "zcopy": true, 00:17:20.362 "get_zone_info": false, 00:17:20.362 "zone_management": false, 00:17:20.362 "zone_append": false, 00:17:20.362 "compare": false, 00:17:20.362 "compare_and_write": false, 00:17:20.362 "abort": true, 00:17:20.362 "seek_hole": false, 00:17:20.362 "seek_data": false, 00:17:20.362 "copy": true, 00:17:20.362 "nvme_iov_md": false 00:17:20.362 }, 00:17:20.362 "memory_domains": [ 00:17:20.362 { 00:17:20.362 "dma_device_id": "system", 00:17:20.362 "dma_device_type": 1 00:17:20.362 }, 00:17:20.362 { 00:17:20.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.362 "dma_device_type": 2 00:17:20.362 } 00:17:20.362 ], 00:17:20.362 "driver_specific": {} 00:17:20.362 } 00:17:20.362 ] 00:17:20.362 05:27:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.362 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.362 "name": "Existed_Raid", 00:17:20.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.362 "strip_size_kb": 64, 00:17:20.362 "state": "configuring", 00:17:20.363 "raid_level": "raid0", 00:17:20.363 "superblock": false, 00:17:20.363 "num_base_bdevs": 3, 00:17:20.363 "num_base_bdevs_discovered": 2, 00:17:20.363 "num_base_bdevs_operational": 3, 00:17:20.363 "base_bdevs_list": [ 00:17:20.363 { 00:17:20.363 "name": "BaseBdev1", 00:17:20.363 "uuid": "a622e919-a25e-4834-9a2e-669ffc07c3e2", 00:17:20.363 "is_configured": true, 00:17:20.363 "data_offset": 0, 00:17:20.363 "data_size": 65536 00:17:20.363 }, 00:17:20.363 { 00:17:20.363 "name": "BaseBdev2", 00:17:20.363 "uuid": "52eb672a-99ba-48df-8952-f79aa1026c65", 00:17:20.363 "is_configured": true, 00:17:20.363 "data_offset": 0, 00:17:20.363 "data_size": 65536 00:17:20.363 }, 00:17:20.363 { 00:17:20.363 "name": "BaseBdev3", 00:17:20.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.363 "is_configured": false, 00:17:20.363 "data_offset": 0, 00:17:20.363 "data_size": 0 00:17:20.363 } 00:17:20.363 ] 00:17:20.363 }' 00:17:20.363 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.363 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.622 [2024-11-20 05:27:52.389985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:17:20.622 [2024-11-20 05:27:52.390049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:20.622 [2024-11-20 05:27:52.390065] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:20.622 [2024-11-20 05:27:52.390340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:20.622 [2024-11-20 05:27:52.390526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:20.622 [2024-11-20 05:27:52.390537] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:20.622 [2024-11-20 05:27:52.390809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.622 BaseBdev3 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.622 05:27:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.622 [ 00:17:20.622 { 00:17:20.622 "name": "BaseBdev3", 00:17:20.622 "aliases": [ 00:17:20.622 "a25c4794-1603-4fc8-bf68-bcdb51cf0c5e" 00:17:20.622 ], 00:17:20.622 "product_name": "Malloc disk", 00:17:20.622 "block_size": 512, 00:17:20.622 "num_blocks": 65536, 00:17:20.622 "uuid": "a25c4794-1603-4fc8-bf68-bcdb51cf0c5e", 00:17:20.622 "assigned_rate_limits": { 00:17:20.622 "rw_ios_per_sec": 0, 00:17:20.622 "rw_mbytes_per_sec": 0, 00:17:20.622 "r_mbytes_per_sec": 0, 00:17:20.622 "w_mbytes_per_sec": 0 00:17:20.622 }, 00:17:20.622 "claimed": true, 00:17:20.622 "claim_type": "exclusive_write", 00:17:20.622 "zoned": false, 00:17:20.622 "supported_io_types": { 00:17:20.622 "read": true, 00:17:20.622 "write": true, 00:17:20.622 "unmap": true, 00:17:20.622 "flush": true, 00:17:20.622 "reset": true, 00:17:20.622 "nvme_admin": false, 00:17:20.622 "nvme_io": false, 00:17:20.622 "nvme_io_md": false, 00:17:20.622 "write_zeroes": true, 00:17:20.622 "zcopy": true, 00:17:20.622 "get_zone_info": false, 00:17:20.622 "zone_management": false, 00:17:20.622 "zone_append": false, 00:17:20.622 "compare": false, 00:17:20.622 "compare_and_write": false, 00:17:20.622 "abort": true, 00:17:20.622 "seek_hole": false, 00:17:20.622 "seek_data": false, 00:17:20.622 "copy": true, 00:17:20.622 "nvme_iov_md": false 00:17:20.622 }, 00:17:20.622 "memory_domains": [ 00:17:20.622 { 00:17:20.622 "dma_device_id": "system", 00:17:20.622 "dma_device_type": 1 00:17:20.622 }, 00:17:20.622 { 00:17:20.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.622 "dma_device_type": 
2 00:17:20.622 } 00:17:20.622 ], 00:17:20.622 "driver_specific": {} 00:17:20.622 } 00:17:20.622 ] 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
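With BaseBdev3 created and claimed, all three base bdevs are discovered and the trace switches from expecting `configuring` to expecting `online` (`verify_raid_bdev_state Existed_Raid online raid0 64 3`). A toy model of the progression visible across the dumps in this trace; this is an illustration of what the log shows, not SPDK's actual state machine:

```python
# The raid bdev stays "configuring" until every base bdev has been
# discovered, then transitions to "online" (see the successive
# raid_bdev_info dumps: discovered goes 1 -> 2 -> 3).
def raid_state(discovered, num_base_bdevs):
    return "online" if discovered == num_base_bdevs else "configuring"

assert raid_state(1, 3) == "configuring"  # only BaseBdev1 present
assert raid_state(2, 3) == "configuring"  # after BaseBdev2 is added
assert raid_state(3, 3) == "online"       # after BaseBdev3 is added
```

The same pattern runs in reverse later in the trace: deleting BaseBdev1 from the online raid0 array (which `has_redundancy` rejects with `return 1`) drops the state to `offline` with two of three base bdevs remaining.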
00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.622 "name": "Existed_Raid", 00:17:20.622 "uuid": "bac4a346-64e0-4ff9-8c9d-e4bf250f5698", 00:17:20.622 "strip_size_kb": 64, 00:17:20.622 "state": "online", 00:17:20.622 "raid_level": "raid0", 00:17:20.622 "superblock": false, 00:17:20.622 "num_base_bdevs": 3, 00:17:20.622 "num_base_bdevs_discovered": 3, 00:17:20.622 "num_base_bdevs_operational": 3, 00:17:20.622 "base_bdevs_list": [ 00:17:20.622 { 00:17:20.622 "name": "BaseBdev1", 00:17:20.622 "uuid": "a622e919-a25e-4834-9a2e-669ffc07c3e2", 00:17:20.622 "is_configured": true, 00:17:20.622 "data_offset": 0, 00:17:20.622 "data_size": 65536 00:17:20.622 }, 00:17:20.622 { 00:17:20.622 "name": "BaseBdev2", 00:17:20.622 "uuid": "52eb672a-99ba-48df-8952-f79aa1026c65", 00:17:20.622 "is_configured": true, 00:17:20.622 "data_offset": 0, 00:17:20.622 "data_size": 65536 00:17:20.622 }, 00:17:20.622 { 00:17:20.622 "name": "BaseBdev3", 00:17:20.622 "uuid": "a25c4794-1603-4fc8-bf68-bcdb51cf0c5e", 00:17:20.622 "is_configured": true, 00:17:20.622 "data_offset": 0, 00:17:20.622 "data_size": 65536 00:17:20.622 } 00:17:20.622 ] 00:17:20.622 }' 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.622 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.208 [2024-11-20 05:27:52.786520] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:21.208 "name": "Existed_Raid", 00:17:21.208 "aliases": [ 00:17:21.208 "bac4a346-64e0-4ff9-8c9d-e4bf250f5698" 00:17:21.208 ], 00:17:21.208 "product_name": "Raid Volume", 00:17:21.208 "block_size": 512, 00:17:21.208 "num_blocks": 196608, 00:17:21.208 "uuid": "bac4a346-64e0-4ff9-8c9d-e4bf250f5698", 00:17:21.208 "assigned_rate_limits": { 00:17:21.208 "rw_ios_per_sec": 0, 00:17:21.208 "rw_mbytes_per_sec": 0, 00:17:21.208 "r_mbytes_per_sec": 0, 00:17:21.208 "w_mbytes_per_sec": 0 00:17:21.208 }, 00:17:21.208 "claimed": false, 00:17:21.208 "zoned": false, 00:17:21.208 "supported_io_types": { 00:17:21.208 "read": true, 00:17:21.208 "write": true, 00:17:21.208 "unmap": true, 00:17:21.208 "flush": true, 00:17:21.208 "reset": true, 00:17:21.208 "nvme_admin": false, 00:17:21.208 "nvme_io": false, 00:17:21.208 "nvme_io_md": false, 00:17:21.208 "write_zeroes": true, 00:17:21.208 
"zcopy": false, 00:17:21.208 "get_zone_info": false, 00:17:21.208 "zone_management": false, 00:17:21.208 "zone_append": false, 00:17:21.208 "compare": false, 00:17:21.208 "compare_and_write": false, 00:17:21.208 "abort": false, 00:17:21.208 "seek_hole": false, 00:17:21.208 "seek_data": false, 00:17:21.208 "copy": false, 00:17:21.208 "nvme_iov_md": false 00:17:21.208 }, 00:17:21.208 "memory_domains": [ 00:17:21.208 { 00:17:21.208 "dma_device_id": "system", 00:17:21.208 "dma_device_type": 1 00:17:21.208 }, 00:17:21.208 { 00:17:21.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.208 "dma_device_type": 2 00:17:21.208 }, 00:17:21.208 { 00:17:21.208 "dma_device_id": "system", 00:17:21.208 "dma_device_type": 1 00:17:21.208 }, 00:17:21.208 { 00:17:21.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.208 "dma_device_type": 2 00:17:21.208 }, 00:17:21.208 { 00:17:21.208 "dma_device_id": "system", 00:17:21.208 "dma_device_type": 1 00:17:21.208 }, 00:17:21.208 { 00:17:21.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.208 "dma_device_type": 2 00:17:21.208 } 00:17:21.208 ], 00:17:21.208 "driver_specific": { 00:17:21.208 "raid": { 00:17:21.208 "uuid": "bac4a346-64e0-4ff9-8c9d-e4bf250f5698", 00:17:21.208 "strip_size_kb": 64, 00:17:21.208 "state": "online", 00:17:21.208 "raid_level": "raid0", 00:17:21.208 "superblock": false, 00:17:21.208 "num_base_bdevs": 3, 00:17:21.208 "num_base_bdevs_discovered": 3, 00:17:21.208 "num_base_bdevs_operational": 3, 00:17:21.208 "base_bdevs_list": [ 00:17:21.208 { 00:17:21.208 "name": "BaseBdev1", 00:17:21.208 "uuid": "a622e919-a25e-4834-9a2e-669ffc07c3e2", 00:17:21.208 "is_configured": true, 00:17:21.208 "data_offset": 0, 00:17:21.208 "data_size": 65536 00:17:21.208 }, 00:17:21.208 { 00:17:21.208 "name": "BaseBdev2", 00:17:21.208 "uuid": "52eb672a-99ba-48df-8952-f79aa1026c65", 00:17:21.208 "is_configured": true, 00:17:21.208 "data_offset": 0, 00:17:21.208 "data_size": 65536 00:17:21.208 }, 00:17:21.208 { 00:17:21.208 "name": 
"BaseBdev3", 00:17:21.208 "uuid": "a25c4794-1603-4fc8-bf68-bcdb51cf0c5e", 00:17:21.208 "is_configured": true, 00:17:21.208 "data_offset": 0, 00:17:21.208 "data_size": 65536 00:17:21.208 } 00:17:21.208 ] 00:17:21.208 } 00:17:21.208 } 00:17:21.208 }' 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:21.208 BaseBdev2 00:17:21.208 BaseBdev3' 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.208 05:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.208 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:21.209 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:21.209 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:21.209 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.209 05:27:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:21.209 [2024-11-20 05:27:53.014282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:21.209 [2024-11-20 05:27:53.014484] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.209 [2024-11-20 05:27:53.014561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.466 05:27:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.466 "name": "Existed_Raid", 00:17:21.466 "uuid": "bac4a346-64e0-4ff9-8c9d-e4bf250f5698", 00:17:21.466 "strip_size_kb": 64, 00:17:21.466 "state": "offline", 00:17:21.466 "raid_level": "raid0", 00:17:21.466 "superblock": false, 00:17:21.466 "num_base_bdevs": 3, 00:17:21.466 "num_base_bdevs_discovered": 2, 00:17:21.466 "num_base_bdevs_operational": 2, 00:17:21.466 "base_bdevs_list": [ 00:17:21.466 { 00:17:21.466 "name": null, 00:17:21.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.466 "is_configured": false, 00:17:21.466 "data_offset": 0, 00:17:21.466 "data_size": 65536 00:17:21.466 }, 00:17:21.466 { 00:17:21.466 "name": "BaseBdev2", 00:17:21.466 "uuid": "52eb672a-99ba-48df-8952-f79aa1026c65", 00:17:21.466 "is_configured": true, 00:17:21.466 "data_offset": 0, 00:17:21.466 "data_size": 65536 00:17:21.466 }, 00:17:21.466 { 00:17:21.466 "name": "BaseBdev3", 00:17:21.466 "uuid": "a25c4794-1603-4fc8-bf68-bcdb51cf0c5e", 00:17:21.466 "is_configured": true, 00:17:21.466 "data_offset": 0, 00:17:21.466 "data_size": 65536 00:17:21.466 } 00:17:21.466 ] 00:17:21.466 }' 00:17:21.466 05:27:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.466 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.724 [2024-11-20 05:27:53.454275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.724 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.982 [2024-11-20 05:27:53.558397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:21.982 [2024-11-20 05:27:53.558466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.982 
05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.982 BaseBdev2 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:21.982 05:27:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.982 [ 00:17:21.982 { 00:17:21.982 "name": "BaseBdev2", 00:17:21.982 "aliases": [ 00:17:21.982 "4aa55d74-fc3b-4099-872e-05b99763c67a" 00:17:21.982 ], 00:17:21.982 "product_name": "Malloc disk", 00:17:21.982 "block_size": 512, 00:17:21.982 "num_blocks": 65536, 00:17:21.982 "uuid": "4aa55d74-fc3b-4099-872e-05b99763c67a", 00:17:21.982 "assigned_rate_limits": { 00:17:21.982 "rw_ios_per_sec": 0, 00:17:21.982 "rw_mbytes_per_sec": 0, 00:17:21.982 "r_mbytes_per_sec": 0, 00:17:21.982 "w_mbytes_per_sec": 0 00:17:21.982 }, 00:17:21.982 "claimed": false, 00:17:21.982 "zoned": false, 00:17:21.982 "supported_io_types": { 00:17:21.982 "read": true, 00:17:21.982 "write": true, 00:17:21.982 "unmap": true, 00:17:21.982 "flush": true, 00:17:21.982 "reset": true, 00:17:21.982 "nvme_admin": false, 00:17:21.982 "nvme_io": false, 00:17:21.982 "nvme_io_md": false, 00:17:21.982 "write_zeroes": true, 00:17:21.982 "zcopy": true, 00:17:21.982 "get_zone_info": false, 00:17:21.982 "zone_management": false, 00:17:21.982 "zone_append": false, 00:17:21.982 "compare": false, 00:17:21.982 "compare_and_write": false, 00:17:21.982 "abort": true, 00:17:21.982 "seek_hole": false, 00:17:21.982 "seek_data": false, 00:17:21.982 "copy": true, 00:17:21.982 "nvme_iov_md": false 00:17:21.982 }, 00:17:21.982 "memory_domains": [ 00:17:21.982 { 00:17:21.982 
"dma_device_id": "system", 00:17:21.982 "dma_device_type": 1 00:17:21.982 }, 00:17:21.982 { 00:17:21.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.982 "dma_device_type": 2 00:17:21.982 } 00:17:21.982 ], 00:17:21.982 "driver_specific": {} 00:17:21.982 } 00:17:21.982 ] 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.982 BaseBdev3 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:21.982 05:27:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.982 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.982 [ 00:17:21.982 { 00:17:21.982 "name": "BaseBdev3", 00:17:21.982 "aliases": [ 00:17:21.982 "b0f8767e-9698-43a5-98bd-7d5cc6d25c4e" 00:17:21.983 ], 00:17:21.983 "product_name": "Malloc disk", 00:17:21.983 "block_size": 512, 00:17:21.983 "num_blocks": 65536, 00:17:21.983 "uuid": "b0f8767e-9698-43a5-98bd-7d5cc6d25c4e", 00:17:21.983 "assigned_rate_limits": { 00:17:21.983 "rw_ios_per_sec": 0, 00:17:21.983 "rw_mbytes_per_sec": 0, 00:17:21.983 "r_mbytes_per_sec": 0, 00:17:21.983 "w_mbytes_per_sec": 0 00:17:21.983 }, 00:17:21.983 "claimed": false, 00:17:21.983 "zoned": false, 00:17:21.983 "supported_io_types": { 00:17:21.983 "read": true, 00:17:21.983 "write": true, 00:17:21.983 "unmap": true, 00:17:21.983 "flush": true, 00:17:21.983 "reset": true, 00:17:21.983 "nvme_admin": false, 00:17:21.983 "nvme_io": false, 00:17:21.983 "nvme_io_md": false, 00:17:21.983 "write_zeroes": true, 00:17:21.983 "zcopy": true, 00:17:21.983 "get_zone_info": false, 00:17:21.983 "zone_management": false, 00:17:21.983 "zone_append": false, 00:17:21.983 "compare": false, 00:17:21.983 "compare_and_write": false, 00:17:21.983 "abort": true, 00:17:21.983 "seek_hole": false, 00:17:21.983 "seek_data": false, 00:17:21.983 "copy": true, 00:17:21.983 "nvme_iov_md": false 00:17:21.983 }, 00:17:21.983 "memory_domains": [ 00:17:21.983 { 00:17:21.983 
"dma_device_id": "system", 00:17:21.983 "dma_device_type": 1 00:17:21.983 }, 00:17:21.983 { 00:17:21.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.983 "dma_device_type": 2 00:17:21.983 } 00:17:21.983 ], 00:17:21.983 "driver_specific": {} 00:17:21.983 } 00:17:21.983 ] 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.983 [2024-11-20 05:27:53.776231] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:21.983 [2024-11-20 05:27:53.776447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:21.983 [2024-11-20 05:27:53.776540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:21.983 [2024-11-20 05:27:53.778650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:21.983 
05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.983 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.240 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.240 "name": "Existed_Raid", 00:17:22.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.240 "strip_size_kb": 64, 00:17:22.240 "state": "configuring", 00:17:22.240 "raid_level": "raid0", 00:17:22.240 "superblock": false, 00:17:22.240 "num_base_bdevs": 3, 00:17:22.240 "num_base_bdevs_discovered": 2, 00:17:22.240 "num_base_bdevs_operational": 3, 00:17:22.240 "base_bdevs_list": [ 00:17:22.240 { 00:17:22.240 "name": 
"BaseBdev1", 00:17:22.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.240 "is_configured": false, 00:17:22.240 "data_offset": 0, 00:17:22.240 "data_size": 0 00:17:22.240 }, 00:17:22.240 { 00:17:22.240 "name": "BaseBdev2", 00:17:22.240 "uuid": "4aa55d74-fc3b-4099-872e-05b99763c67a", 00:17:22.240 "is_configured": true, 00:17:22.240 "data_offset": 0, 00:17:22.240 "data_size": 65536 00:17:22.240 }, 00:17:22.240 { 00:17:22.240 "name": "BaseBdev3", 00:17:22.240 "uuid": "b0f8767e-9698-43a5-98bd-7d5cc6d25c4e", 00:17:22.240 "is_configured": true, 00:17:22.240 "data_offset": 0, 00:17:22.240 "data_size": 65536 00:17:22.240 } 00:17:22.240 ] 00:17:22.240 }' 00:17:22.240 05:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.240 05:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.497 [2024-11-20 05:27:54.148281] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- 
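The `verify_raid_bdev_state` helper traced above selects the `Existed_Raid` entry from `bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` and then compares its counters against the expected values. A minimal standalone Python sketch of that same selection and check, seeded with field values copied from the JSON dump above (the RPC transport itself is assumed to happen elsewhere):

```python
import json

# Abbreviated output of `bdev_raid_get_bdevs all`, reduced to the fields
# the shell helper actually inspects (values taken from the log above).
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid0",
    "superblock": false,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": false},
      {"name": "BaseBdev2", "is_configured": true},
      {"name": "BaseBdev3", "is_configured": true}
    ]
  }
]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    """Mirror of the shell helper: select by name, compare the counters."""
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # The discovered count must match the configured slots in the list.
    configured = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert info["num_base_bdevs_discovered"] == configured
    return info

info = verify_raid_bdev_state(raid_bdevs, "Existed_Raid",
                              "configuring", "raid0", 64, 3)
```

This replicates only the bookkeeping the test asserts on; the real helper shells out to `rpc_cmd` and jq rather than parsing in-process.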
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.497 "name": "Existed_Raid", 00:17:22.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.497 "strip_size_kb": 64, 00:17:22.497 "state": "configuring", 00:17:22.497 "raid_level": "raid0", 00:17:22.497 "superblock": false, 00:17:22.497 "num_base_bdevs": 3, 00:17:22.497 "num_base_bdevs_discovered": 1, 00:17:22.497 "num_base_bdevs_operational": 3, 00:17:22.497 "base_bdevs_list": [ 00:17:22.497 { 00:17:22.497 "name": "BaseBdev1", 00:17:22.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.497 "is_configured": false, 00:17:22.497 "data_offset": 0, 00:17:22.497 "data_size": 0 00:17:22.497 }, 00:17:22.497 { 00:17:22.497 "name": null, 00:17:22.497 "uuid": "4aa55d74-fc3b-4099-872e-05b99763c67a", 
00:17:22.497 "is_configured": false, 00:17:22.497 "data_offset": 0, 00:17:22.497 "data_size": 65536 00:17:22.497 }, 00:17:22.497 { 00:17:22.497 "name": "BaseBdev3", 00:17:22.497 "uuid": "b0f8767e-9698-43a5-98bd-7d5cc6d25c4e", 00:17:22.497 "is_configured": true, 00:17:22.497 "data_offset": 0, 00:17:22.497 "data_size": 65536 00:17:22.497 } 00:17:22.497 ] 00:17:22.497 }' 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.497 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.754 [2024-11-20 05:27:54.509567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.754 BaseBdev1 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:22.754 05:27:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.754 [ 00:17:22.754 { 00:17:22.754 "name": "BaseBdev1", 00:17:22.754 "aliases": [ 00:17:22.754 "4cdfb415-ac21-4daf-8a6b-f5a974437a5e" 00:17:22.754 ], 00:17:22.754 "product_name": "Malloc disk", 00:17:22.754 "block_size": 512, 00:17:22.754 "num_blocks": 65536, 00:17:22.754 "uuid": "4cdfb415-ac21-4daf-8a6b-f5a974437a5e", 00:17:22.754 "assigned_rate_limits": { 00:17:22.754 "rw_ios_per_sec": 0, 00:17:22.754 "rw_mbytes_per_sec": 0, 00:17:22.754 "r_mbytes_per_sec": 0, 00:17:22.754 "w_mbytes_per_sec": 0 00:17:22.754 }, 00:17:22.754 "claimed": true, 00:17:22.754 "claim_type": "exclusive_write", 00:17:22.754 "zoned": false, 00:17:22.754 "supported_io_types": { 
00:17:22.754 "read": true, 00:17:22.754 "write": true, 00:17:22.754 "unmap": true, 00:17:22.754 "flush": true, 00:17:22.754 "reset": true, 00:17:22.754 "nvme_admin": false, 00:17:22.754 "nvme_io": false, 00:17:22.754 "nvme_io_md": false, 00:17:22.754 "write_zeroes": true, 00:17:22.754 "zcopy": true, 00:17:22.754 "get_zone_info": false, 00:17:22.754 "zone_management": false, 00:17:22.754 "zone_append": false, 00:17:22.754 "compare": false, 00:17:22.754 "compare_and_write": false, 00:17:22.754 "abort": true, 00:17:22.754 "seek_hole": false, 00:17:22.754 "seek_data": false, 00:17:22.754 "copy": true, 00:17:22.754 "nvme_iov_md": false 00:17:22.754 }, 00:17:22.754 "memory_domains": [ 00:17:22.754 { 00:17:22.754 "dma_device_id": "system", 00:17:22.754 "dma_device_type": 1 00:17:22.754 }, 00:17:22.754 { 00:17:22.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.754 "dma_device_type": 2 00:17:22.754 } 00:17:22.754 ], 00:17:22.754 "driver_specific": {} 00:17:22.754 } 00:17:22.754 ] 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.754 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.754 "name": "Existed_Raid", 00:17:22.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.754 "strip_size_kb": 64, 00:17:22.754 "state": "configuring", 00:17:22.754 "raid_level": "raid0", 00:17:22.754 "superblock": false, 00:17:22.754 "num_base_bdevs": 3, 00:17:22.754 "num_base_bdevs_discovered": 2, 00:17:22.755 "num_base_bdevs_operational": 3, 00:17:22.755 "base_bdevs_list": [ 00:17:22.755 { 00:17:22.755 "name": "BaseBdev1", 00:17:22.755 "uuid": "4cdfb415-ac21-4daf-8a6b-f5a974437a5e", 00:17:22.755 "is_configured": true, 00:17:22.755 "data_offset": 0, 00:17:22.755 "data_size": 65536 00:17:22.755 }, 00:17:22.755 { 00:17:22.755 "name": null, 00:17:22.755 "uuid": "4aa55d74-fc3b-4099-872e-05b99763c67a", 00:17:22.755 "is_configured": false, 00:17:22.755 "data_offset": 0, 00:17:22.755 "data_size": 65536 00:17:22.755 }, 00:17:22.755 { 00:17:22.755 "name": "BaseBdev3", 00:17:22.755 "uuid": "b0f8767e-9698-43a5-98bd-7d5cc6d25c4e", 
00:17:22.755 "is_configured": true, 00:17:22.755 "data_offset": 0, 00:17:22.755 "data_size": 65536 00:17:22.755 } 00:17:22.755 ] 00:17:22.755 }' 00:17:22.755 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.755 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.320 [2024-11-20 05:27:54.913706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.320 05:27:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.320 "name": "Existed_Raid", 00:17:23.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.320 "strip_size_kb": 64, 00:17:23.320 "state": "configuring", 00:17:23.320 "raid_level": "raid0", 00:17:23.320 "superblock": false, 00:17:23.320 "num_base_bdevs": 3, 00:17:23.320 "num_base_bdevs_discovered": 1, 00:17:23.320 "num_base_bdevs_operational": 3, 00:17:23.320 "base_bdevs_list": [ 00:17:23.320 { 00:17:23.320 "name": "BaseBdev1", 00:17:23.320 "uuid": "4cdfb415-ac21-4daf-8a6b-f5a974437a5e", 00:17:23.320 "is_configured": true, 00:17:23.320 "data_offset": 0, 
00:17:23.320 "data_size": 65536 00:17:23.320 }, 00:17:23.320 { 00:17:23.320 "name": null, 00:17:23.320 "uuid": "4aa55d74-fc3b-4099-872e-05b99763c67a", 00:17:23.320 "is_configured": false, 00:17:23.320 "data_offset": 0, 00:17:23.320 "data_size": 65536 00:17:23.320 }, 00:17:23.320 { 00:17:23.320 "name": null, 00:17:23.320 "uuid": "b0f8767e-9698-43a5-98bd-7d5cc6d25c4e", 00:17:23.320 "is_configured": false, 00:17:23.320 "data_offset": 0, 00:17:23.320 "data_size": 65536 00:17:23.320 } 00:17:23.320 ] 00:17:23.320 }' 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.320 05:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.578 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.578 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:23.578 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.578 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.578 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.578 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.579 [2024-11-20 05:27:55.289806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.579 
05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.579 "name": "Existed_Raid", 00:17:23.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.579 "strip_size_kb": 64, 00:17:23.579 "state": "configuring", 
00:17:23.579 "raid_level": "raid0", 00:17:23.579 "superblock": false, 00:17:23.579 "num_base_bdevs": 3, 00:17:23.579 "num_base_bdevs_discovered": 2, 00:17:23.579 "num_base_bdevs_operational": 3, 00:17:23.579 "base_bdevs_list": [ 00:17:23.579 { 00:17:23.579 "name": "BaseBdev1", 00:17:23.579 "uuid": "4cdfb415-ac21-4daf-8a6b-f5a974437a5e", 00:17:23.579 "is_configured": true, 00:17:23.579 "data_offset": 0, 00:17:23.579 "data_size": 65536 00:17:23.579 }, 00:17:23.579 { 00:17:23.579 "name": null, 00:17:23.579 "uuid": "4aa55d74-fc3b-4099-872e-05b99763c67a", 00:17:23.579 "is_configured": false, 00:17:23.579 "data_offset": 0, 00:17:23.579 "data_size": 65536 00:17:23.579 }, 00:17:23.579 { 00:17:23.579 "name": "BaseBdev3", 00:17:23.579 "uuid": "b0f8767e-9698-43a5-98bd-7d5cc6d25c4e", 00:17:23.579 "is_configured": true, 00:17:23.579 "data_offset": 0, 00:17:23.579 "data_size": 65536 00:17:23.579 } 00:17:23.579 ] 00:17:23.579 }' 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.579 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.837 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:23.837 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.837 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.837 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- 
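The remove/add cycle traced above (`bdev_raid_remove_base_bdev BaseBdev3`, then `bdev_raid_add_base_bdev Existed_Raid BaseBdev3`) empties a base-bdev slot and refills it: the dumps show the slot's `name` dropping to null while its `uuid` is retained, with `num_base_bdevs_discovered` going 2 -> 1 -> 2 and the array staying in `configuring` state. A simplified Python model of that slot bookkeeping (not the SPDK implementation; names and UUIDs are taken from the dumps above):

```python
# Slot state as shown in the log before the BaseBdev3 removal:
# BaseBdev1 and BaseBdev3 configured, the BaseBdev2 slot already emptied.
info = {
    "state": "configuring",
    "num_base_bdevs_discovered": 2,
    "base_bdevs_list": [
        {"name": "BaseBdev1",
         "uuid": "4cdfb415-ac21-4daf-8a6b-f5a974437a5e",
         "is_configured": True},
        {"name": None,
         "uuid": "4aa55d74-fc3b-4099-872e-05b99763c67a",
         "is_configured": False},
        {"name": "BaseBdev3",
         "uuid": "b0f8767e-9698-43a5-98bd-7d5cc6d25c4e",
         "is_configured": True},
    ],
}

def remove_base_bdev(info, name):
    """Empty the named slot; the uuid is kept so the slot can be refilled."""
    for slot in info["base_bdevs_list"]:
        if slot["name"] == name:
            slot["name"] = None
            slot["is_configured"] = False
            info["num_base_bdevs_discovered"] -= 1
            return

def add_base_bdev(info, uuid, name):
    """Claim a bdev into the empty slot whose uuid matches."""
    for slot in info["base_bdevs_list"]:
        if slot["uuid"] == uuid and not slot["is_configured"]:
            slot["name"] = name
            slot["is_configured"] = True
            info["num_base_bdevs_discovered"] += 1
            return

remove_base_bdev(info, "BaseBdev3")  # discovered: 2 -> 1, as in the log
add_base_bdev(info, "b0f8767e-9698-43a5-98bd-7d5cc6d25c4e",
              "BaseBdev3")           # discovered: 1 -> 2, slot reclaimed
```

The model captures why the test checks `.base_bdevs_list[2].is_configured` before and after each RPC: the slot index is stable across the remove/add cycle.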
common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.094 [2024-11-20 05:27:55.701897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.094 
05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.094 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.094 "name": "Existed_Raid", 00:17:24.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.094 "strip_size_kb": 64, 00:17:24.094 "state": "configuring", 00:17:24.094 "raid_level": "raid0", 00:17:24.094 "superblock": false, 00:17:24.094 "num_base_bdevs": 3, 00:17:24.094 "num_base_bdevs_discovered": 1, 00:17:24.094 "num_base_bdevs_operational": 3, 00:17:24.094 "base_bdevs_list": [ 00:17:24.094 { 00:17:24.094 "name": null, 00:17:24.094 "uuid": "4cdfb415-ac21-4daf-8a6b-f5a974437a5e", 00:17:24.094 "is_configured": false, 00:17:24.094 "data_offset": 0, 00:17:24.094 "data_size": 65536 00:17:24.094 }, 00:17:24.094 { 00:17:24.094 "name": null, 00:17:24.094 "uuid": "4aa55d74-fc3b-4099-872e-05b99763c67a", 00:17:24.094 "is_configured": false, 00:17:24.095 "data_offset": 0, 00:17:24.095 "data_size": 65536 00:17:24.095 }, 00:17:24.095 { 00:17:24.095 "name": "BaseBdev3", 00:17:24.095 "uuid": "b0f8767e-9698-43a5-98bd-7d5cc6d25c4e", 00:17:24.095 "is_configured": true, 00:17:24.095 "data_offset": 0, 00:17:24.095 "data_size": 65536 00:17:24.095 } 00:17:24.095 ] 00:17:24.095 }' 00:17:24.095 05:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.095 05:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:24.351 05:27:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.351 [2024-11-20 05:27:56.128570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.351 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.352 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.352 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.352 05:27:56 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.352 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.352 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.352 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.352 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.352 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.352 "name": "Existed_Raid", 00:17:24.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.352 "strip_size_kb": 64, 00:17:24.352 "state": "configuring", 00:17:24.352 "raid_level": "raid0", 00:17:24.352 "superblock": false, 00:17:24.352 "num_base_bdevs": 3, 00:17:24.352 "num_base_bdevs_discovered": 2, 00:17:24.352 "num_base_bdevs_operational": 3, 00:17:24.352 "base_bdevs_list": [ 00:17:24.352 { 00:17:24.352 "name": null, 00:17:24.352 "uuid": "4cdfb415-ac21-4daf-8a6b-f5a974437a5e", 00:17:24.352 "is_configured": false, 00:17:24.352 "data_offset": 0, 00:17:24.352 "data_size": 65536 00:17:24.352 }, 00:17:24.352 { 00:17:24.352 "name": "BaseBdev2", 00:17:24.352 "uuid": "4aa55d74-fc3b-4099-872e-05b99763c67a", 00:17:24.352 "is_configured": true, 00:17:24.352 "data_offset": 0, 00:17:24.352 "data_size": 65536 00:17:24.352 }, 00:17:24.352 { 00:17:24.352 "name": "BaseBdev3", 00:17:24.352 "uuid": "b0f8767e-9698-43a5-98bd-7d5cc6d25c4e", 00:17:24.352 "is_configured": true, 00:17:24.352 "data_offset": 0, 00:17:24.352 "data_size": 65536 00:17:24.352 } 00:17:24.352 ] 00:17:24.352 }' 00:17:24.352 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.352 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4cdfb415-ac21-4daf-8a6b-f5a974437a5e 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.918 [2024-11-20 05:27:56.525646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:24.918 [2024-11-20 05:27:56.525704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:24.918 [2024-11-20 05:27:56.525713] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:24.918 [2024-11-20 05:27:56.525935] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:24.918 [2024-11-20 05:27:56.526048] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:24.918 [2024-11-20 05:27:56.526056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:24.918 [2024-11-20 05:27:56.526282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.918 NewBaseBdev 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.918 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.919 
05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.919 [ 00:17:24.919 { 00:17:24.919 "name": "NewBaseBdev", 00:17:24.919 "aliases": [ 00:17:24.919 "4cdfb415-ac21-4daf-8a6b-f5a974437a5e" 00:17:24.919 ], 00:17:24.919 "product_name": "Malloc disk", 00:17:24.919 "block_size": 512, 00:17:24.919 "num_blocks": 65536, 00:17:24.919 "uuid": "4cdfb415-ac21-4daf-8a6b-f5a974437a5e", 00:17:24.919 "assigned_rate_limits": { 00:17:24.919 "rw_ios_per_sec": 0, 00:17:24.919 "rw_mbytes_per_sec": 0, 00:17:24.919 "r_mbytes_per_sec": 0, 00:17:24.919 "w_mbytes_per_sec": 0 00:17:24.919 }, 00:17:24.919 "claimed": true, 00:17:24.919 "claim_type": "exclusive_write", 00:17:24.919 "zoned": false, 00:17:24.919 "supported_io_types": { 00:17:24.919 "read": true, 00:17:24.919 "write": true, 00:17:24.919 "unmap": true, 00:17:24.919 "flush": true, 00:17:24.919 "reset": true, 00:17:24.919 "nvme_admin": false, 00:17:24.919 "nvme_io": false, 00:17:24.919 "nvme_io_md": false, 00:17:24.919 "write_zeroes": true, 00:17:24.919 "zcopy": true, 00:17:24.919 "get_zone_info": false, 00:17:24.919 "zone_management": false, 00:17:24.919 "zone_append": false, 00:17:24.919 "compare": false, 00:17:24.919 "compare_and_write": false, 00:17:24.919 "abort": true, 00:17:24.919 "seek_hole": false, 00:17:24.919 "seek_data": false, 00:17:24.919 "copy": true, 00:17:24.919 "nvme_iov_md": false 00:17:24.919 }, 00:17:24.919 "memory_domains": [ 00:17:24.919 { 00:17:24.919 "dma_device_id": "system", 00:17:24.919 "dma_device_type": 1 00:17:24.919 }, 00:17:24.919 { 00:17:24.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.919 "dma_device_type": 2 00:17:24.919 } 00:17:24.919 ], 00:17:24.919 "driver_specific": {} 00:17:24.919 } 00:17:24.919 ] 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:24.919 05:27:56 
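Editorial note on the dump above: `bdev_malloc_create 32 512 -b NewBaseBdev` requests a 32 MiB malloc bdev with a 512-byte block size, and the `bdev_get_bdevs` descriptor that follows reports exactly `"num_blocks": 65536`. A minimal, illustrative Python check of that arithmetic (not part of the test suite itself):

```python
# bdev_malloc_create takes size in MiB and block size in bytes;
# the values here are the ones used in the log above.
size_mb = 32
block_size = 512

# A 32 MiB device divided into 512-byte blocks yields 65536 blocks,
# matching "num_blocks": 65536 in the bdev_get_bdevs output.
num_blocks = size_mb * 1024 * 1024 // block_size
assert num_blocks == 65536
```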
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.919 "name": "Existed_Raid", 00:17:24.919 "uuid": "30421491-932a-4b86-82ff-4abecf946e92", 00:17:24.919 "strip_size_kb": 64, 00:17:24.919 "state": "online", 00:17:24.919 "raid_level": 
"raid0", 00:17:24.919 "superblock": false, 00:17:24.919 "num_base_bdevs": 3, 00:17:24.919 "num_base_bdevs_discovered": 3, 00:17:24.919 "num_base_bdevs_operational": 3, 00:17:24.919 "base_bdevs_list": [ 00:17:24.919 { 00:17:24.919 "name": "NewBaseBdev", 00:17:24.919 "uuid": "4cdfb415-ac21-4daf-8a6b-f5a974437a5e", 00:17:24.919 "is_configured": true, 00:17:24.919 "data_offset": 0, 00:17:24.919 "data_size": 65536 00:17:24.919 }, 00:17:24.919 { 00:17:24.919 "name": "BaseBdev2", 00:17:24.919 "uuid": "4aa55d74-fc3b-4099-872e-05b99763c67a", 00:17:24.919 "is_configured": true, 00:17:24.919 "data_offset": 0, 00:17:24.919 "data_size": 65536 00:17:24.919 }, 00:17:24.919 { 00:17:24.919 "name": "BaseBdev3", 00:17:24.919 "uuid": "b0f8767e-9698-43a5-98bd-7d5cc6d25c4e", 00:17:24.919 "is_configured": true, 00:17:24.919 "data_offset": 0, 00:17:24.919 "data_size": 65536 00:17:24.919 } 00:17:24.919 ] 00:17:24.919 }' 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.919 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.177 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:25.177 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:25.177 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:25.177 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:25.177 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:25.177 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:25.177 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:25.177 05:27:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.177 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.177 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:25.177 [2024-11-20 05:27:56.862039] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.177 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.177 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:25.177 "name": "Existed_Raid", 00:17:25.177 "aliases": [ 00:17:25.177 "30421491-932a-4b86-82ff-4abecf946e92" 00:17:25.177 ], 00:17:25.177 "product_name": "Raid Volume", 00:17:25.177 "block_size": 512, 00:17:25.177 "num_blocks": 196608, 00:17:25.177 "uuid": "30421491-932a-4b86-82ff-4abecf946e92", 00:17:25.177 "assigned_rate_limits": { 00:17:25.177 "rw_ios_per_sec": 0, 00:17:25.177 "rw_mbytes_per_sec": 0, 00:17:25.177 "r_mbytes_per_sec": 0, 00:17:25.177 "w_mbytes_per_sec": 0 00:17:25.177 }, 00:17:25.177 "claimed": false, 00:17:25.177 "zoned": false, 00:17:25.177 "supported_io_types": { 00:17:25.177 "read": true, 00:17:25.177 "write": true, 00:17:25.177 "unmap": true, 00:17:25.177 "flush": true, 00:17:25.177 "reset": true, 00:17:25.177 "nvme_admin": false, 00:17:25.177 "nvme_io": false, 00:17:25.177 "nvme_io_md": false, 00:17:25.177 "write_zeroes": true, 00:17:25.177 "zcopy": false, 00:17:25.177 "get_zone_info": false, 00:17:25.177 "zone_management": false, 00:17:25.177 "zone_append": false, 00:17:25.177 "compare": false, 00:17:25.177 "compare_and_write": false, 00:17:25.177 "abort": false, 00:17:25.177 "seek_hole": false, 00:17:25.177 "seek_data": false, 00:17:25.177 "copy": false, 00:17:25.177 "nvme_iov_md": false 00:17:25.177 }, 00:17:25.177 "memory_domains": [ 00:17:25.177 { 00:17:25.177 "dma_device_id": "system", 00:17:25.177 "dma_device_type": 1 00:17:25.177 }, 00:17:25.177 { 
00:17:25.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.177 "dma_device_type": 2 00:17:25.177 }, 00:17:25.177 { 00:17:25.177 "dma_device_id": "system", 00:17:25.177 "dma_device_type": 1 00:17:25.177 }, 00:17:25.177 { 00:17:25.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.177 "dma_device_type": 2 00:17:25.177 }, 00:17:25.177 { 00:17:25.177 "dma_device_id": "system", 00:17:25.177 "dma_device_type": 1 00:17:25.177 }, 00:17:25.177 { 00:17:25.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.177 "dma_device_type": 2 00:17:25.177 } 00:17:25.177 ], 00:17:25.177 "driver_specific": { 00:17:25.177 "raid": { 00:17:25.177 "uuid": "30421491-932a-4b86-82ff-4abecf946e92", 00:17:25.177 "strip_size_kb": 64, 00:17:25.177 "state": "online", 00:17:25.177 "raid_level": "raid0", 00:17:25.177 "superblock": false, 00:17:25.177 "num_base_bdevs": 3, 00:17:25.177 "num_base_bdevs_discovered": 3, 00:17:25.177 "num_base_bdevs_operational": 3, 00:17:25.177 "base_bdevs_list": [ 00:17:25.177 { 00:17:25.177 "name": "NewBaseBdev", 00:17:25.177 "uuid": "4cdfb415-ac21-4daf-8a6b-f5a974437a5e", 00:17:25.177 "is_configured": true, 00:17:25.177 "data_offset": 0, 00:17:25.177 "data_size": 65536 00:17:25.177 }, 00:17:25.177 { 00:17:25.177 "name": "BaseBdev2", 00:17:25.177 "uuid": "4aa55d74-fc3b-4099-872e-05b99763c67a", 00:17:25.177 "is_configured": true, 00:17:25.177 "data_offset": 0, 00:17:25.177 "data_size": 65536 00:17:25.177 }, 00:17:25.177 { 00:17:25.177 "name": "BaseBdev3", 00:17:25.178 "uuid": "b0f8767e-9698-43a5-98bd-7d5cc6d25c4e", 00:17:25.178 "is_configured": true, 00:17:25.178 "data_offset": 0, 00:17:25.178 "data_size": 65536 00:17:25.178 } 00:17:25.178 ] 00:17:25.178 } 00:17:25.178 } 00:17:25.178 }' 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='NewBaseBdev 00:17:25.178 BaseBdev2 00:17:25.178 BaseBdev3' 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.178 05:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:25.435 05:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:25.435 05:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:25.435 05:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:25.435 05:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:25.435 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.435 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.435 05:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:25.435 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.435 05:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:25.435 05:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:25.435 05:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:25.435 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.435 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.435 [2024-11-20 05:27:57.057806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:25.435 [2024-11-20 05:27:57.057844] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:25.435 [2024-11-20 05:27:57.057933] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.435 [2024-11-20 05:27:57.057994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:17:25.435 [2024-11-20 05:27:57.058005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:25.435 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.436 05:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62459 00:17:25.436 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62459 ']' 00:17:25.436 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62459 00:17:25.436 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:17:25.436 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:25.436 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62459 00:17:25.436 killing process with pid 62459 00:17:25.436 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:25.436 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:25.436 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62459' 00:17:25.436 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62459 00:17:25.436 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62459 00:17:25.436 [2024-11-20 05:27:57.090269] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:25.436 [2024-11-20 05:27:57.252447] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:26.381 ************************************ 00:17:26.381 END TEST raid_state_function_test 00:17:26.381 ************************************ 00:17:26.381 05:27:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:26.381 00:17:26.381 real 0m7.977s 00:17:26.381 user 0m12.714s 00:17:26.381 sys 0m1.412s 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.381 05:27:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:17:26.381 05:27:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:26.381 05:27:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:26.381 05:27:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:26.381 ************************************ 00:17:26.381 START TEST raid_state_function_test_sb 00:17:26.381 ************************************ 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:26.381 Process raid pid: 63054 00:17:26.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63054 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63054' 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63054 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 63054 ']' 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:26.381 05:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.381 [2024-11-20 05:27:58.002951] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:17:26.381 [2024-11-20 05:27:58.003075] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.381 [2024-11-20 05:27:58.156481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.639 [2024-11-20 05:27:58.263231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.639 [2024-11-20 05:27:58.389087] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.639 [2024-11-20 05:27:58.389142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.204 [2024-11-20 05:27:58.844963] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:27.204 [2024-11-20 05:27:58.845024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:27.204 [2024-11-20 
05:27:58.845034] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:27.204 [2024-11-20 05:27:58.845042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:27.204 [2024-11-20 05:27:58.845047] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:27.204 [2024-11-20 05:27:58.845055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.204 "name": "Existed_Raid", 00:17:27.204 "uuid": "a7fe5dba-3121-4509-a0c2-d475b21e6518", 00:17:27.204 "strip_size_kb": 64, 00:17:27.204 "state": "configuring", 00:17:27.204 "raid_level": "raid0", 00:17:27.204 "superblock": true, 00:17:27.204 "num_base_bdevs": 3, 00:17:27.204 "num_base_bdevs_discovered": 0, 00:17:27.204 "num_base_bdevs_operational": 3, 00:17:27.204 "base_bdevs_list": [ 00:17:27.204 { 00:17:27.204 "name": "BaseBdev1", 00:17:27.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.204 "is_configured": false, 00:17:27.204 "data_offset": 0, 00:17:27.204 "data_size": 0 00:17:27.204 }, 00:17:27.204 { 00:17:27.204 "name": "BaseBdev2", 00:17:27.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.204 "is_configured": false, 00:17:27.204 "data_offset": 0, 00:17:27.204 "data_size": 0 00:17:27.204 }, 00:17:27.204 { 00:17:27.204 "name": "BaseBdev3", 00:17:27.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.204 "is_configured": false, 00:17:27.204 "data_offset": 0, 00:17:27.204 "data_size": 0 00:17:27.204 } 00:17:27.204 ] 00:17:27.204 }' 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.204 05:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.462 [2024-11-20 05:27:59.160994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:27.462 [2024-11-20 05:27:59.161052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.462 [2024-11-20 05:27:59.168999] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:27.462 [2024-11-20 05:27:59.169056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:27.462 [2024-11-20 05:27:59.169064] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:27.462 [2024-11-20 05:27:59.169072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:27.462 [2024-11-20 05:27:59.169078] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:27.462 [2024-11-20 05:27:59.169085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:27.462 
05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.462 [2024-11-20 05:27:59.200469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.462 BaseBdev1 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.462 [ 00:17:27.462 { 
00:17:27.462 "name": "BaseBdev1", 00:17:27.462 "aliases": [ 00:17:27.462 "4b4631fd-f334-4a8c-b707-67f9660b5529" 00:17:27.462 ], 00:17:27.462 "product_name": "Malloc disk", 00:17:27.462 "block_size": 512, 00:17:27.462 "num_blocks": 65536, 00:17:27.462 "uuid": "4b4631fd-f334-4a8c-b707-67f9660b5529", 00:17:27.462 "assigned_rate_limits": { 00:17:27.462 "rw_ios_per_sec": 0, 00:17:27.462 "rw_mbytes_per_sec": 0, 00:17:27.462 "r_mbytes_per_sec": 0, 00:17:27.462 "w_mbytes_per_sec": 0 00:17:27.462 }, 00:17:27.462 "claimed": true, 00:17:27.462 "claim_type": "exclusive_write", 00:17:27.462 "zoned": false, 00:17:27.462 "supported_io_types": { 00:17:27.462 "read": true, 00:17:27.462 "write": true, 00:17:27.462 "unmap": true, 00:17:27.462 "flush": true, 00:17:27.462 "reset": true, 00:17:27.462 "nvme_admin": false, 00:17:27.462 "nvme_io": false, 00:17:27.462 "nvme_io_md": false, 00:17:27.462 "write_zeroes": true, 00:17:27.462 "zcopy": true, 00:17:27.462 "get_zone_info": false, 00:17:27.462 "zone_management": false, 00:17:27.462 "zone_append": false, 00:17:27.462 "compare": false, 00:17:27.462 "compare_and_write": false, 00:17:27.462 "abort": true, 00:17:27.462 "seek_hole": false, 00:17:27.462 "seek_data": false, 00:17:27.462 "copy": true, 00:17:27.462 "nvme_iov_md": false 00:17:27.462 }, 00:17:27.462 "memory_domains": [ 00:17:27.462 { 00:17:27.462 "dma_device_id": "system", 00:17:27.462 "dma_device_type": 1 00:17:27.462 }, 00:17:27.462 { 00:17:27.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.462 "dma_device_type": 2 00:17:27.462 } 00:17:27.462 ], 00:17:27.462 "driver_specific": {} 00:17:27.462 } 00:17:27.462 ] 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.462 "name": "Existed_Raid", 00:17:27.462 "uuid": "7c912d40-5d7b-4442-8324-3355b29f9391", 00:17:27.462 "strip_size_kb": 64, 00:17:27.462 "state": "configuring", 00:17:27.462 "raid_level": "raid0", 00:17:27.462 "superblock": true, 00:17:27.462 
"num_base_bdevs": 3, 00:17:27.462 "num_base_bdevs_discovered": 1, 00:17:27.462 "num_base_bdevs_operational": 3, 00:17:27.462 "base_bdevs_list": [ 00:17:27.462 { 00:17:27.462 "name": "BaseBdev1", 00:17:27.462 "uuid": "4b4631fd-f334-4a8c-b707-67f9660b5529", 00:17:27.462 "is_configured": true, 00:17:27.462 "data_offset": 2048, 00:17:27.462 "data_size": 63488 00:17:27.462 }, 00:17:27.462 { 00:17:27.462 "name": "BaseBdev2", 00:17:27.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.462 "is_configured": false, 00:17:27.462 "data_offset": 0, 00:17:27.462 "data_size": 0 00:17:27.462 }, 00:17:27.462 { 00:17:27.462 "name": "BaseBdev3", 00:17:27.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.462 "is_configured": false, 00:17:27.462 "data_offset": 0, 00:17:27.462 "data_size": 0 00:17:27.462 } 00:17:27.462 ] 00:17:27.462 }' 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.462 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.027 [2024-11-20 05:27:59.556606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:28.027 [2024-11-20 05:27:59.556800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:28.027 
05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.027 [2024-11-20 05:27:59.564676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.027 [2024-11-20 05:27:59.566526] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:28.027 [2024-11-20 05:27:59.566659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:28.027 [2024-11-20 05:27:59.566710] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:28.027 [2024-11-20 05:27:59.566732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.027 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.027 "name": "Existed_Raid", 00:17:28.027 "uuid": "40064415-7cb7-4c0d-a0a7-8a3de541f09f", 00:17:28.027 "strip_size_kb": 64, 00:17:28.027 "state": "configuring", 00:17:28.027 "raid_level": "raid0", 00:17:28.027 "superblock": true, 00:17:28.027 "num_base_bdevs": 3, 00:17:28.027 "num_base_bdevs_discovered": 1, 00:17:28.027 "num_base_bdevs_operational": 3, 00:17:28.027 "base_bdevs_list": [ 00:17:28.027 { 00:17:28.027 "name": "BaseBdev1", 00:17:28.027 "uuid": "4b4631fd-f334-4a8c-b707-67f9660b5529", 00:17:28.027 "is_configured": true, 00:17:28.028 "data_offset": 2048, 00:17:28.028 "data_size": 63488 00:17:28.028 }, 00:17:28.028 { 00:17:28.028 "name": "BaseBdev2", 00:17:28.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.028 "is_configured": false, 00:17:28.028 "data_offset": 0, 00:17:28.028 "data_size": 0 00:17:28.028 }, 00:17:28.028 { 00:17:28.028 "name": "BaseBdev3", 00:17:28.028 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:28.028 "is_configured": false, 00:17:28.028 "data_offset": 0, 00:17:28.028 "data_size": 0 00:17:28.028 } 00:17:28.028 ] 00:17:28.028 }' 00:17:28.028 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.028 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.285 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:28.285 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.285 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.285 [2024-11-20 05:27:59.950230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.285 BaseBdev2 00:17:28.285 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.285 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:28.285 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:28.285 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:28.285 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.286 [ 00:17:28.286 { 00:17:28.286 "name": "BaseBdev2", 00:17:28.286 "aliases": [ 00:17:28.286 "8765f711-8704-4b44-8a54-8a094ff22081" 00:17:28.286 ], 00:17:28.286 "product_name": "Malloc disk", 00:17:28.286 "block_size": 512, 00:17:28.286 "num_blocks": 65536, 00:17:28.286 "uuid": "8765f711-8704-4b44-8a54-8a094ff22081", 00:17:28.286 "assigned_rate_limits": { 00:17:28.286 "rw_ios_per_sec": 0, 00:17:28.286 "rw_mbytes_per_sec": 0, 00:17:28.286 "r_mbytes_per_sec": 0, 00:17:28.286 "w_mbytes_per_sec": 0 00:17:28.286 }, 00:17:28.286 "claimed": true, 00:17:28.286 "claim_type": "exclusive_write", 00:17:28.286 "zoned": false, 00:17:28.286 "supported_io_types": { 00:17:28.286 "read": true, 00:17:28.286 "write": true, 00:17:28.286 "unmap": true, 00:17:28.286 "flush": true, 00:17:28.286 "reset": true, 00:17:28.286 "nvme_admin": false, 00:17:28.286 "nvme_io": false, 00:17:28.286 "nvme_io_md": false, 00:17:28.286 "write_zeroes": true, 00:17:28.286 "zcopy": true, 00:17:28.286 "get_zone_info": false, 00:17:28.286 "zone_management": false, 00:17:28.286 "zone_append": false, 00:17:28.286 "compare": false, 00:17:28.286 "compare_and_write": false, 00:17:28.286 "abort": true, 00:17:28.286 "seek_hole": false, 00:17:28.286 "seek_data": false, 00:17:28.286 "copy": true, 00:17:28.286 "nvme_iov_md": false 00:17:28.286 }, 00:17:28.286 "memory_domains": [ 00:17:28.286 { 00:17:28.286 "dma_device_id": "system", 00:17:28.286 "dma_device_type": 1 00:17:28.286 }, 00:17:28.286 { 00:17:28.286 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.286 "dma_device_type": 2 00:17:28.286 } 00:17:28.286 ], 00:17:28.286 "driver_specific": {} 00:17:28.286 } 00:17:28.286 ] 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.286 05:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.286 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.286 "name": "Existed_Raid", 00:17:28.286 "uuid": "40064415-7cb7-4c0d-a0a7-8a3de541f09f", 00:17:28.286 "strip_size_kb": 64, 00:17:28.286 "state": "configuring", 00:17:28.286 "raid_level": "raid0", 00:17:28.286 "superblock": true, 00:17:28.286 "num_base_bdevs": 3, 00:17:28.286 "num_base_bdevs_discovered": 2, 00:17:28.286 "num_base_bdevs_operational": 3, 00:17:28.286 "base_bdevs_list": [ 00:17:28.286 { 00:17:28.286 "name": "BaseBdev1", 00:17:28.286 "uuid": "4b4631fd-f334-4a8c-b707-67f9660b5529", 00:17:28.286 "is_configured": true, 00:17:28.286 "data_offset": 2048, 00:17:28.286 "data_size": 63488 00:17:28.286 }, 00:17:28.286 { 00:17:28.286 "name": "BaseBdev2", 00:17:28.286 "uuid": "8765f711-8704-4b44-8a54-8a094ff22081", 00:17:28.286 "is_configured": true, 00:17:28.286 "data_offset": 2048, 00:17:28.286 "data_size": 63488 00:17:28.286 }, 00:17:28.286 { 00:17:28.286 "name": "BaseBdev3", 00:17:28.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.286 "is_configured": false, 00:17:28.286 "data_offset": 0, 00:17:28.286 "data_size": 0 00:17:28.286 } 00:17:28.286 ] 00:17:28.286 }' 00:17:28.286 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.286 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.568 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:28.568 05:28:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.568 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.862 [2024-11-20 05:28:00.381908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:28.862 [2024-11-20 05:28:00.382180] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:28.862 [2024-11-20 05:28:00.382200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:28.862 [2024-11-20 05:28:00.382464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:28.862 [2024-11-20 05:28:00.382593] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:28.862 [2024-11-20 05:28:00.382601] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:28.862 [2024-11-20 05:28:00.382722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.862 BaseBdev3 00:17:28.862 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.862 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:28.862 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:28.862 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:28.862 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:28.862 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_wait_for_examine 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.863 [ 00:17:28.863 { 00:17:28.863 "name": "BaseBdev3", 00:17:28.863 "aliases": [ 00:17:28.863 "c1f2b6ea-782a-4deb-af65-d212d2d218b5" 00:17:28.863 ], 00:17:28.863 "product_name": "Malloc disk", 00:17:28.863 "block_size": 512, 00:17:28.863 "num_blocks": 65536, 00:17:28.863 "uuid": "c1f2b6ea-782a-4deb-af65-d212d2d218b5", 00:17:28.863 "assigned_rate_limits": { 00:17:28.863 "rw_ios_per_sec": 0, 00:17:28.863 "rw_mbytes_per_sec": 0, 00:17:28.863 "r_mbytes_per_sec": 0, 00:17:28.863 "w_mbytes_per_sec": 0 00:17:28.863 }, 00:17:28.863 "claimed": true, 00:17:28.863 "claim_type": "exclusive_write", 00:17:28.863 "zoned": false, 00:17:28.863 "supported_io_types": { 00:17:28.863 "read": true, 00:17:28.863 "write": true, 00:17:28.863 "unmap": true, 00:17:28.863 "flush": true, 00:17:28.863 "reset": true, 00:17:28.863 "nvme_admin": false, 00:17:28.863 "nvme_io": false, 00:17:28.863 "nvme_io_md": false, 00:17:28.863 "write_zeroes": true, 00:17:28.863 "zcopy": true, 00:17:28.863 "get_zone_info": false, 00:17:28.863 "zone_management": false, 00:17:28.863 "zone_append": false, 00:17:28.863 "compare": false, 00:17:28.863 "compare_and_write": false, 00:17:28.863 "abort": true, 00:17:28.863 "seek_hole": false, 00:17:28.863 "seek_data": false, 00:17:28.863 "copy": true, 00:17:28.863 
"nvme_iov_md": false 00:17:28.863 }, 00:17:28.863 "memory_domains": [ 00:17:28.863 { 00:17:28.863 "dma_device_id": "system", 00:17:28.863 "dma_device_type": 1 00:17:28.863 }, 00:17:28.863 { 00:17:28.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.863 "dma_device_type": 2 00:17:28.863 } 00:17:28.863 ], 00:17:28.863 "driver_specific": {} 00:17:28.863 } 00:17:28.863 ] 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.863 
05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.863 "name": "Existed_Raid", 00:17:28.863 "uuid": "40064415-7cb7-4c0d-a0a7-8a3de541f09f", 00:17:28.863 "strip_size_kb": 64, 00:17:28.863 "state": "online", 00:17:28.863 "raid_level": "raid0", 00:17:28.863 "superblock": true, 00:17:28.863 "num_base_bdevs": 3, 00:17:28.863 "num_base_bdevs_discovered": 3, 00:17:28.863 "num_base_bdevs_operational": 3, 00:17:28.863 "base_bdevs_list": [ 00:17:28.863 { 00:17:28.863 "name": "BaseBdev1", 00:17:28.863 "uuid": "4b4631fd-f334-4a8c-b707-67f9660b5529", 00:17:28.863 "is_configured": true, 00:17:28.863 "data_offset": 2048, 00:17:28.863 "data_size": 63488 00:17:28.863 }, 00:17:28.863 { 00:17:28.863 "name": "BaseBdev2", 00:17:28.863 "uuid": "8765f711-8704-4b44-8a54-8a094ff22081", 00:17:28.863 "is_configured": true, 00:17:28.863 "data_offset": 2048, 00:17:28.863 "data_size": 63488 00:17:28.863 }, 00:17:28.863 { 00:17:28.863 "name": "BaseBdev3", 00:17:28.863 "uuid": "c1f2b6ea-782a-4deb-af65-d212d2d218b5", 00:17:28.863 "is_configured": true, 00:17:28.863 "data_offset": 2048, 00:17:28.863 "data_size": 63488 00:17:28.863 } 00:17:28.863 ] 00:17:28.863 }' 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.863 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.126 [2024-11-20 05:28:00.746345] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:29.126 "name": "Existed_Raid", 00:17:29.126 "aliases": [ 00:17:29.126 "40064415-7cb7-4c0d-a0a7-8a3de541f09f" 00:17:29.126 ], 00:17:29.126 "product_name": "Raid Volume", 00:17:29.126 "block_size": 512, 00:17:29.126 "num_blocks": 190464, 00:17:29.126 "uuid": "40064415-7cb7-4c0d-a0a7-8a3de541f09f", 00:17:29.126 "assigned_rate_limits": { 00:17:29.126 "rw_ios_per_sec": 0, 00:17:29.126 "rw_mbytes_per_sec": 0, 00:17:29.126 "r_mbytes_per_sec": 0, 00:17:29.126 "w_mbytes_per_sec": 0 00:17:29.126 }, 
00:17:29.126 "claimed": false, 00:17:29.126 "zoned": false, 00:17:29.126 "supported_io_types": { 00:17:29.126 "read": true, 00:17:29.126 "write": true, 00:17:29.126 "unmap": true, 00:17:29.126 "flush": true, 00:17:29.126 "reset": true, 00:17:29.126 "nvme_admin": false, 00:17:29.126 "nvme_io": false, 00:17:29.126 "nvme_io_md": false, 00:17:29.126 "write_zeroes": true, 00:17:29.126 "zcopy": false, 00:17:29.126 "get_zone_info": false, 00:17:29.126 "zone_management": false, 00:17:29.126 "zone_append": false, 00:17:29.126 "compare": false, 00:17:29.126 "compare_and_write": false, 00:17:29.126 "abort": false, 00:17:29.126 "seek_hole": false, 00:17:29.126 "seek_data": false, 00:17:29.126 "copy": false, 00:17:29.126 "nvme_iov_md": false 00:17:29.126 }, 00:17:29.126 "memory_domains": [ 00:17:29.126 { 00:17:29.126 "dma_device_id": "system", 00:17:29.126 "dma_device_type": 1 00:17:29.126 }, 00:17:29.126 { 00:17:29.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.126 "dma_device_type": 2 00:17:29.126 }, 00:17:29.126 { 00:17:29.126 "dma_device_id": "system", 00:17:29.126 "dma_device_type": 1 00:17:29.126 }, 00:17:29.126 { 00:17:29.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.126 "dma_device_type": 2 00:17:29.126 }, 00:17:29.126 { 00:17:29.126 "dma_device_id": "system", 00:17:29.126 "dma_device_type": 1 00:17:29.126 }, 00:17:29.126 { 00:17:29.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.126 "dma_device_type": 2 00:17:29.126 } 00:17:29.126 ], 00:17:29.126 "driver_specific": { 00:17:29.126 "raid": { 00:17:29.126 "uuid": "40064415-7cb7-4c0d-a0a7-8a3de541f09f", 00:17:29.126 "strip_size_kb": 64, 00:17:29.126 "state": "online", 00:17:29.126 "raid_level": "raid0", 00:17:29.126 "superblock": true, 00:17:29.126 "num_base_bdevs": 3, 00:17:29.126 "num_base_bdevs_discovered": 3, 00:17:29.126 "num_base_bdevs_operational": 3, 00:17:29.126 "base_bdevs_list": [ 00:17:29.126 { 00:17:29.126 "name": "BaseBdev1", 00:17:29.126 "uuid": "4b4631fd-f334-4a8c-b707-67f9660b5529", 
00:17:29.126 "is_configured": true, 00:17:29.126 "data_offset": 2048, 00:17:29.126 "data_size": 63488 00:17:29.126 }, 00:17:29.126 { 00:17:29.126 "name": "BaseBdev2", 00:17:29.126 "uuid": "8765f711-8704-4b44-8a54-8a094ff22081", 00:17:29.126 "is_configured": true, 00:17:29.126 "data_offset": 2048, 00:17:29.126 "data_size": 63488 00:17:29.126 }, 00:17:29.126 { 00:17:29.126 "name": "BaseBdev3", 00:17:29.126 "uuid": "c1f2b6ea-782a-4deb-af65-d212d2d218b5", 00:17:29.126 "is_configured": true, 00:17:29.126 "data_offset": 2048, 00:17:29.126 "data_size": 63488 00:17:29.126 } 00:17:29.126 ] 00:17:29.126 } 00:17:29.126 } 00:17:29.126 }' 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:29.126 BaseBdev2 00:17:29.126 BaseBdev3' 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.126 
05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.126 05:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.126 [2024-11-20 05:28:00.950126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:29.126 [2024-11-20 05:28:00.950158] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.126 [2024-11-20 05:28:00.950209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.385 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.385 "name": "Existed_Raid", 00:17:29.385 "uuid": "40064415-7cb7-4c0d-a0a7-8a3de541f09f", 00:17:29.385 "strip_size_kb": 64, 00:17:29.385 "state": "offline", 00:17:29.385 "raid_level": "raid0", 00:17:29.385 "superblock": true, 00:17:29.385 "num_base_bdevs": 3, 00:17:29.385 "num_base_bdevs_discovered": 2, 00:17:29.385 "num_base_bdevs_operational": 2, 00:17:29.385 "base_bdevs_list": [ 00:17:29.385 { 00:17:29.386 "name": null, 00:17:29.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.386 "is_configured": false, 00:17:29.386 "data_offset": 0, 00:17:29.386 "data_size": 63488 00:17:29.386 
}, 00:17:29.386 { 00:17:29.386 "name": "BaseBdev2", 00:17:29.386 "uuid": "8765f711-8704-4b44-8a54-8a094ff22081", 00:17:29.386 "is_configured": true, 00:17:29.386 "data_offset": 2048, 00:17:29.386 "data_size": 63488 00:17:29.386 }, 00:17:29.386 { 00:17:29.386 "name": "BaseBdev3", 00:17:29.386 "uuid": "c1f2b6ea-782a-4deb-af65-d212d2d218b5", 00:17:29.386 "is_configured": true, 00:17:29.386 "data_offset": 2048, 00:17:29.386 "data_size": 63488 00:17:29.386 } 00:17:29.386 ] 00:17:29.386 }' 00:17:29.386 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.386 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.645 05:28:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.645 [2024-11-20 05:28:01.346167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.645 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.645 [2024-11-20 05:28:01.440041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:29.645 [2024-11-20 05:28:01.440101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.903 BaseBdev2 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:29.903 05:28:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.903 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.903 [ 00:17:29.903 { 00:17:29.903 "name": "BaseBdev2", 00:17:29.903 "aliases": [ 00:17:29.903 "f473d4be-1844-438a-bd4c-e3357326c7c4" 00:17:29.903 ], 00:17:29.903 "product_name": "Malloc disk", 00:17:29.903 "block_size": 512, 00:17:29.903 "num_blocks": 65536, 00:17:29.903 "uuid": "f473d4be-1844-438a-bd4c-e3357326c7c4", 00:17:29.903 "assigned_rate_limits": { 00:17:29.903 "rw_ios_per_sec": 0, 00:17:29.903 "rw_mbytes_per_sec": 0, 00:17:29.903 "r_mbytes_per_sec": 0, 00:17:29.903 "w_mbytes_per_sec": 0 00:17:29.903 }, 00:17:29.903 "claimed": false, 00:17:29.903 "zoned": false, 00:17:29.903 "supported_io_types": { 
00:17:29.903 "read": true, 00:17:29.903 "write": true, 00:17:29.903 "unmap": true, 00:17:29.903 "flush": true, 00:17:29.903 "reset": true, 00:17:29.903 "nvme_admin": false, 00:17:29.903 "nvme_io": false, 00:17:29.903 "nvme_io_md": false, 00:17:29.903 "write_zeroes": true, 00:17:29.903 "zcopy": true, 00:17:29.904 "get_zone_info": false, 00:17:29.904 "zone_management": false, 00:17:29.904 "zone_append": false, 00:17:29.904 "compare": false, 00:17:29.904 "compare_and_write": false, 00:17:29.904 "abort": true, 00:17:29.904 "seek_hole": false, 00:17:29.904 "seek_data": false, 00:17:29.904 "copy": true, 00:17:29.904 "nvme_iov_md": false 00:17:29.904 }, 00:17:29.904 "memory_domains": [ 00:17:29.904 { 00:17:29.904 "dma_device_id": "system", 00:17:29.904 "dma_device_type": 1 00:17:29.904 }, 00:17:29.904 { 00:17:29.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.904 "dma_device_type": 2 00:17:29.904 } 00:17:29.904 ], 00:17:29.904 "driver_specific": {} 00:17:29.904 } 00:17:29.904 ] 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.904 BaseBdev3 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 
-- # waitforbdev BaseBdev3 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.904 [ 00:17:29.904 { 00:17:29.904 "name": "BaseBdev3", 00:17:29.904 "aliases": [ 00:17:29.904 "779a0608-2393-4be1-8b8a-488924549a6e" 00:17:29.904 ], 00:17:29.904 "product_name": "Malloc disk", 00:17:29.904 "block_size": 512, 00:17:29.904 "num_blocks": 65536, 00:17:29.904 "uuid": "779a0608-2393-4be1-8b8a-488924549a6e", 00:17:29.904 "assigned_rate_limits": { 00:17:29.904 "rw_ios_per_sec": 0, 00:17:29.904 "rw_mbytes_per_sec": 0, 00:17:29.904 "r_mbytes_per_sec": 0, 00:17:29.904 "w_mbytes_per_sec": 0 00:17:29.904 }, 00:17:29.904 "claimed": false, 00:17:29.904 "zoned": 
false, 00:17:29.904 "supported_io_types": { 00:17:29.904 "read": true, 00:17:29.904 "write": true, 00:17:29.904 "unmap": true, 00:17:29.904 "flush": true, 00:17:29.904 "reset": true, 00:17:29.904 "nvme_admin": false, 00:17:29.904 "nvme_io": false, 00:17:29.904 "nvme_io_md": false, 00:17:29.904 "write_zeroes": true, 00:17:29.904 "zcopy": true, 00:17:29.904 "get_zone_info": false, 00:17:29.904 "zone_management": false, 00:17:29.904 "zone_append": false, 00:17:29.904 "compare": false, 00:17:29.904 "compare_and_write": false, 00:17:29.904 "abort": true, 00:17:29.904 "seek_hole": false, 00:17:29.904 "seek_data": false, 00:17:29.904 "copy": true, 00:17:29.904 "nvme_iov_md": false 00:17:29.904 }, 00:17:29.904 "memory_domains": [ 00:17:29.904 { 00:17:29.904 "dma_device_id": "system", 00:17:29.904 "dma_device_type": 1 00:17:29.904 }, 00:17:29.904 { 00:17:29.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.904 "dma_device_type": 2 00:17:29.904 } 00:17:29.904 ], 00:17:29.904 "driver_specific": {} 00:17:29.904 } 00:17:29.904 ] 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.904 [2024-11-20 05:28:01.638894] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev1 00:17:29.904 [2024-11-20 05:28:01.639073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:29.904 [2024-11-20 05:28:01.639146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:29.904 [2024-11-20 05:28:01.640916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.904 05:28:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.904 "name": "Existed_Raid", 00:17:29.904 "uuid": "bbf0173f-aaeb-4c05-ba99-91e2a2d03a58", 00:17:29.904 "strip_size_kb": 64, 00:17:29.904 "state": "configuring", 00:17:29.904 "raid_level": "raid0", 00:17:29.904 "superblock": true, 00:17:29.904 "num_base_bdevs": 3, 00:17:29.904 "num_base_bdevs_discovered": 2, 00:17:29.904 "num_base_bdevs_operational": 3, 00:17:29.904 "base_bdevs_list": [ 00:17:29.904 { 00:17:29.904 "name": "BaseBdev1", 00:17:29.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.904 "is_configured": false, 00:17:29.904 "data_offset": 0, 00:17:29.904 "data_size": 0 00:17:29.904 }, 00:17:29.904 { 00:17:29.904 "name": "BaseBdev2", 00:17:29.904 "uuid": "f473d4be-1844-438a-bd4c-e3357326c7c4", 00:17:29.904 "is_configured": true, 00:17:29.904 "data_offset": 2048, 00:17:29.904 "data_size": 63488 00:17:29.904 }, 00:17:29.904 { 00:17:29.904 "name": "BaseBdev3", 00:17:29.904 "uuid": "779a0608-2393-4be1-8b8a-488924549a6e", 00:17:29.904 "is_configured": true, 00:17:29.904 "data_offset": 2048, 00:17:29.904 "data_size": 63488 00:17:29.904 } 00:17:29.904 ] 00:17:29.904 }' 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.904 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.469 05:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:30.470 05:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.470 05:28:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.470 [2024-11-20 05:28:02.002976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:30.470 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.470 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:30.470 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.470 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.470 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:30.470 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.470 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.470 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.470 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.470 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.470 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.470 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.470 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.470 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.470 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.470 05:28:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.470 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.470 "name": "Existed_Raid", 00:17:30.470 "uuid": "bbf0173f-aaeb-4c05-ba99-91e2a2d03a58", 00:17:30.470 "strip_size_kb": 64, 00:17:30.470 "state": "configuring", 00:17:30.470 "raid_level": "raid0", 00:17:30.470 "superblock": true, 00:17:30.470 "num_base_bdevs": 3, 00:17:30.470 "num_base_bdevs_discovered": 1, 00:17:30.470 "num_base_bdevs_operational": 3, 00:17:30.470 "base_bdevs_list": [ 00:17:30.470 { 00:17:30.470 "name": "BaseBdev1", 00:17:30.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.470 "is_configured": false, 00:17:30.470 "data_offset": 0, 00:17:30.470 "data_size": 0 00:17:30.470 }, 00:17:30.470 { 00:17:30.470 "name": null, 00:17:30.470 "uuid": "f473d4be-1844-438a-bd4c-e3357326c7c4", 00:17:30.470 "is_configured": false, 00:17:30.470 "data_offset": 0, 00:17:30.470 "data_size": 63488 00:17:30.470 }, 00:17:30.470 { 00:17:30.470 "name": "BaseBdev3", 00:17:30.470 "uuid": "779a0608-2393-4be1-8b8a-488924549a6e", 00:17:30.470 "is_configured": true, 00:17:30.471 "data_offset": 2048, 00:17:30.471 "data_size": 63488 00:17:30.471 } 00:17:30.471 ] 00:17:30.471 }' 00:17:30.471 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.471 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:30.729 05:28:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.729 [2024-11-20 05:28:02.399802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.729 BaseBdev1 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.729 05:28:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.729 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.729 [ 00:17:30.729 { 00:17:30.729 "name": "BaseBdev1", 00:17:30.729 "aliases": [ 00:17:30.729 "b8ade0f4-350d-469d-a674-3dce7b63e35c" 00:17:30.729 ], 00:17:30.729 "product_name": "Malloc disk", 00:17:30.729 "block_size": 512, 00:17:30.729 "num_blocks": 65536, 00:17:30.729 "uuid": "b8ade0f4-350d-469d-a674-3dce7b63e35c", 00:17:30.729 "assigned_rate_limits": { 00:17:30.729 "rw_ios_per_sec": 0, 00:17:30.729 "rw_mbytes_per_sec": 0, 00:17:30.729 "r_mbytes_per_sec": 0, 00:17:30.729 "w_mbytes_per_sec": 0 00:17:30.729 }, 00:17:30.729 "claimed": true, 00:17:30.729 "claim_type": "exclusive_write", 00:17:30.729 "zoned": false, 00:17:30.729 "supported_io_types": { 00:17:30.729 "read": true, 00:17:30.729 "write": true, 00:17:30.729 "unmap": true, 00:17:30.729 "flush": true, 00:17:30.729 "reset": true, 00:17:30.729 "nvme_admin": false, 00:17:30.729 "nvme_io": false, 00:17:30.729 "nvme_io_md": false, 00:17:30.729 "write_zeroes": true, 00:17:30.729 "zcopy": true, 00:17:30.729 "get_zone_info": false, 00:17:30.729 "zone_management": false, 00:17:30.729 "zone_append": false, 00:17:30.729 "compare": false, 00:17:30.729 "compare_and_write": false, 00:17:30.729 "abort": true, 00:17:30.729 "seek_hole": false, 00:17:30.729 "seek_data": false, 00:17:30.729 "copy": true, 00:17:30.729 "nvme_iov_md": false 00:17:30.729 }, 00:17:30.729 "memory_domains": [ 00:17:30.729 { 00:17:30.729 "dma_device_id": "system", 00:17:30.729 "dma_device_type": 1 00:17:30.729 }, 00:17:30.729 { 00:17:30.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.729 "dma_device_type": 2 00:17:30.729 } 00:17:30.729 ], 00:17:30.730 "driver_specific": {} 00:17:30.730 } 00:17:30.730 ] 
00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.730 "name": "Existed_Raid", 00:17:30.730 "uuid": "bbf0173f-aaeb-4c05-ba99-91e2a2d03a58", 00:17:30.730 "strip_size_kb": 64, 00:17:30.730 "state": "configuring", 00:17:30.730 "raid_level": "raid0", 00:17:30.730 "superblock": true, 00:17:30.730 "num_base_bdevs": 3, 00:17:30.730 "num_base_bdevs_discovered": 2, 00:17:30.730 "num_base_bdevs_operational": 3, 00:17:30.730 "base_bdevs_list": [ 00:17:30.730 { 00:17:30.730 "name": "BaseBdev1", 00:17:30.730 "uuid": "b8ade0f4-350d-469d-a674-3dce7b63e35c", 00:17:30.730 "is_configured": true, 00:17:30.730 "data_offset": 2048, 00:17:30.730 "data_size": 63488 00:17:30.730 }, 00:17:30.730 { 00:17:30.730 "name": null, 00:17:30.730 "uuid": "f473d4be-1844-438a-bd4c-e3357326c7c4", 00:17:30.730 "is_configured": false, 00:17:30.730 "data_offset": 0, 00:17:30.730 "data_size": 63488 00:17:30.730 }, 00:17:30.730 { 00:17:30.730 "name": "BaseBdev3", 00:17:30.730 "uuid": "779a0608-2393-4be1-8b8a-488924549a6e", 00:17:30.730 "is_configured": true, 00:17:30.730 "data_offset": 2048, 00:17:30.730 "data_size": 63488 00:17:30.730 } 00:17:30.730 ] 00:17:30.730 }' 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.730 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.988 [2024-11-20 05:28:02.799947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.988 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.246 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.246 "name": "Existed_Raid", 00:17:31.246 "uuid": "bbf0173f-aaeb-4c05-ba99-91e2a2d03a58", 00:17:31.246 "strip_size_kb": 64, 00:17:31.246 "state": "configuring", 00:17:31.246 "raid_level": "raid0", 00:17:31.246 "superblock": true, 00:17:31.246 "num_base_bdevs": 3, 00:17:31.246 "num_base_bdevs_discovered": 1, 00:17:31.246 "num_base_bdevs_operational": 3, 00:17:31.246 "base_bdevs_list": [ 00:17:31.246 { 00:17:31.246 "name": "BaseBdev1", 00:17:31.246 "uuid": "b8ade0f4-350d-469d-a674-3dce7b63e35c", 00:17:31.246 "is_configured": true, 00:17:31.246 "data_offset": 2048, 00:17:31.246 "data_size": 63488 00:17:31.246 }, 00:17:31.246 { 00:17:31.246 "name": null, 00:17:31.246 "uuid": "f473d4be-1844-438a-bd4c-e3357326c7c4", 00:17:31.246 "is_configured": false, 00:17:31.246 "data_offset": 0, 00:17:31.246 "data_size": 63488 00:17:31.246 }, 00:17:31.246 { 00:17:31.246 "name": null, 00:17:31.246 "uuid": "779a0608-2393-4be1-8b8a-488924549a6e", 00:17:31.246 "is_configured": false, 00:17:31.246 "data_offset": 0, 00:17:31.246 "data_size": 63488 00:17:31.246 } 00:17:31.246 ] 00:17:31.246 }' 00:17:31.246 05:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.246 05:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.505 [2024-11-20 05:28:03.144053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.505 "name": "Existed_Raid", 00:17:31.505 "uuid": "bbf0173f-aaeb-4c05-ba99-91e2a2d03a58", 00:17:31.505 "strip_size_kb": 64, 00:17:31.505 "state": "configuring", 00:17:31.505 "raid_level": "raid0", 00:17:31.505 "superblock": true, 00:17:31.505 "num_base_bdevs": 3, 00:17:31.505 "num_base_bdevs_discovered": 2, 00:17:31.505 "num_base_bdevs_operational": 3, 00:17:31.505 "base_bdevs_list": [ 00:17:31.505 { 00:17:31.505 "name": "BaseBdev1", 00:17:31.505 "uuid": "b8ade0f4-350d-469d-a674-3dce7b63e35c", 00:17:31.505 "is_configured": true, 00:17:31.505 "data_offset": 2048, 00:17:31.505 "data_size": 63488 00:17:31.505 }, 00:17:31.505 { 00:17:31.505 "name": null, 00:17:31.505 "uuid": "f473d4be-1844-438a-bd4c-e3357326c7c4", 00:17:31.505 "is_configured": false, 00:17:31.505 "data_offset": 0, 00:17:31.505 "data_size": 63488 00:17:31.505 }, 00:17:31.505 { 00:17:31.505 "name": "BaseBdev3", 00:17:31.505 "uuid": 
"779a0608-2393-4be1-8b8a-488924549a6e", 00:17:31.505 "is_configured": true, 00:17:31.505 "data_offset": 2048, 00:17:31.505 "data_size": 63488 00:17:31.505 } 00:17:31.505 ] 00:17:31.505 }' 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.505 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.781 [2024-11-20 05:28:03.524132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.781 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.045 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.045 "name": "Existed_Raid", 00:17:32.045 "uuid": "bbf0173f-aaeb-4c05-ba99-91e2a2d03a58", 00:17:32.045 "strip_size_kb": 64, 00:17:32.045 "state": "configuring", 00:17:32.045 "raid_level": "raid0", 00:17:32.045 "superblock": true, 00:17:32.045 "num_base_bdevs": 3, 00:17:32.045 "num_base_bdevs_discovered": 1, 00:17:32.045 "num_base_bdevs_operational": 3, 00:17:32.045 "base_bdevs_list": [ 00:17:32.045 { 00:17:32.045 "name": null, 00:17:32.045 
"uuid": "b8ade0f4-350d-469d-a674-3dce7b63e35c", 00:17:32.045 "is_configured": false, 00:17:32.045 "data_offset": 0, 00:17:32.046 "data_size": 63488 00:17:32.046 }, 00:17:32.046 { 00:17:32.046 "name": null, 00:17:32.046 "uuid": "f473d4be-1844-438a-bd4c-e3357326c7c4", 00:17:32.046 "is_configured": false, 00:17:32.046 "data_offset": 0, 00:17:32.046 "data_size": 63488 00:17:32.046 }, 00:17:32.046 { 00:17:32.046 "name": "BaseBdev3", 00:17:32.046 "uuid": "779a0608-2393-4be1-8b8a-488924549a6e", 00:17:32.046 "is_configured": true, 00:17:32.046 "data_offset": 2048, 00:17:32.046 "data_size": 63488 00:17:32.046 } 00:17:32.046 ] 00:17:32.046 }' 00:17:32.046 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.046 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.304 [2024-11-20 05:28:03.934681] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.304 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.305 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.305 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.305 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.305 05:28:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.305 "name": "Existed_Raid", 00:17:32.305 "uuid": "bbf0173f-aaeb-4c05-ba99-91e2a2d03a58", 00:17:32.305 "strip_size_kb": 64, 00:17:32.305 "state": "configuring", 00:17:32.305 "raid_level": "raid0", 00:17:32.305 "superblock": true, 00:17:32.305 "num_base_bdevs": 3, 00:17:32.305 "num_base_bdevs_discovered": 2, 00:17:32.305 "num_base_bdevs_operational": 3, 00:17:32.305 "base_bdevs_list": [ 00:17:32.305 { 00:17:32.305 "name": null, 00:17:32.305 "uuid": "b8ade0f4-350d-469d-a674-3dce7b63e35c", 00:17:32.305 "is_configured": false, 00:17:32.305 "data_offset": 0, 00:17:32.305 "data_size": 63488 00:17:32.305 }, 00:17:32.305 { 00:17:32.305 "name": "BaseBdev2", 00:17:32.305 "uuid": "f473d4be-1844-438a-bd4c-e3357326c7c4", 00:17:32.305 "is_configured": true, 00:17:32.305 "data_offset": 2048, 00:17:32.305 "data_size": 63488 00:17:32.305 }, 00:17:32.305 { 00:17:32.305 "name": "BaseBdev3", 00:17:32.305 "uuid": "779a0608-2393-4be1-8b8a-488924549a6e", 00:17:32.305 "is_configured": true, 00:17:32.305 "data_offset": 2048, 00:17:32.305 "data_size": 63488 00:17:32.305 } 00:17:32.305 ] 00:17:32.305 }' 00:17:32.305 05:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.305 05:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.564 
05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b8ade0f4-350d-469d-a674-3dce7b63e35c 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.564 [2024-11-20 05:28:04.343550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:32.564 [2024-11-20 05:28:04.343768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:32.564 [2024-11-20 05:28:04.343781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:32.564 [2024-11-20 05:28:04.344012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:32.564 NewBaseBdev 00:17:32.564 [2024-11-20 05:28:04.344122] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:32.564 [2024-11-20 05:28:04.344129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:32.564 [2024-11-20 05:28:04.344241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.564 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.564 [ 00:17:32.564 { 00:17:32.564 "name": "NewBaseBdev", 00:17:32.564 "aliases": [ 00:17:32.564 "b8ade0f4-350d-469d-a674-3dce7b63e35c" 00:17:32.564 ], 00:17:32.564 "product_name": "Malloc disk", 00:17:32.564 "block_size": 512, 00:17:32.564 "num_blocks": 65536, 00:17:32.564 "uuid": "b8ade0f4-350d-469d-a674-3dce7b63e35c", 00:17:32.564 
"assigned_rate_limits": { 00:17:32.564 "rw_ios_per_sec": 0, 00:17:32.564 "rw_mbytes_per_sec": 0, 00:17:32.564 "r_mbytes_per_sec": 0, 00:17:32.564 "w_mbytes_per_sec": 0 00:17:32.564 }, 00:17:32.564 "claimed": true, 00:17:32.564 "claim_type": "exclusive_write", 00:17:32.564 "zoned": false, 00:17:32.564 "supported_io_types": { 00:17:32.564 "read": true, 00:17:32.564 "write": true, 00:17:32.564 "unmap": true, 00:17:32.564 "flush": true, 00:17:32.565 "reset": true, 00:17:32.565 "nvme_admin": false, 00:17:32.565 "nvme_io": false, 00:17:32.565 "nvme_io_md": false, 00:17:32.565 "write_zeroes": true, 00:17:32.565 "zcopy": true, 00:17:32.565 "get_zone_info": false, 00:17:32.565 "zone_management": false, 00:17:32.565 "zone_append": false, 00:17:32.565 "compare": false, 00:17:32.565 "compare_and_write": false, 00:17:32.565 "abort": true, 00:17:32.565 "seek_hole": false, 00:17:32.565 "seek_data": false, 00:17:32.565 "copy": true, 00:17:32.565 "nvme_iov_md": false 00:17:32.565 }, 00:17:32.565 "memory_domains": [ 00:17:32.565 { 00:17:32.565 "dma_device_id": "system", 00:17:32.565 "dma_device_type": 1 00:17:32.565 }, 00:17:32.565 { 00:17:32.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.565 "dma_device_type": 2 00:17:32.565 } 00:17:32.565 ], 00:17:32.565 "driver_specific": {} 00:17:32.565 } 00:17:32.565 ] 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.565 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.822 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.822 "name": "Existed_Raid", 00:17:32.822 "uuid": "bbf0173f-aaeb-4c05-ba99-91e2a2d03a58", 00:17:32.822 "strip_size_kb": 64, 00:17:32.822 "state": "online", 00:17:32.822 "raid_level": "raid0", 00:17:32.822 "superblock": true, 00:17:32.822 "num_base_bdevs": 3, 00:17:32.822 "num_base_bdevs_discovered": 3, 00:17:32.822 "num_base_bdevs_operational": 3, 00:17:32.822 "base_bdevs_list": [ 00:17:32.822 { 00:17:32.822 "name": "NewBaseBdev", 00:17:32.822 "uuid": "b8ade0f4-350d-469d-a674-3dce7b63e35c", 00:17:32.822 "is_configured": true, 00:17:32.822 "data_offset": 2048, 
00:17:32.822 "data_size": 63488 00:17:32.822 }, 00:17:32.822 { 00:17:32.822 "name": "BaseBdev2", 00:17:32.822 "uuid": "f473d4be-1844-438a-bd4c-e3357326c7c4", 00:17:32.822 "is_configured": true, 00:17:32.822 "data_offset": 2048, 00:17:32.822 "data_size": 63488 00:17:32.822 }, 00:17:32.822 { 00:17:32.822 "name": "BaseBdev3", 00:17:32.822 "uuid": "779a0608-2393-4be1-8b8a-488924549a6e", 00:17:32.822 "is_configured": true, 00:17:32.822 "data_offset": 2048, 00:17:32.822 "data_size": 63488 00:17:32.822 } 00:17:32.822 ] 00:17:32.822 }' 00:17:32.822 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.822 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.080 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:33.080 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:33.080 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:33.080 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:33.080 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:33.080 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:33.080 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:33.080 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:33.080 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.080 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.080 [2024-11-20 05:28:04.719959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:33.080 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.080 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:33.080 "name": "Existed_Raid", 00:17:33.080 "aliases": [ 00:17:33.080 "bbf0173f-aaeb-4c05-ba99-91e2a2d03a58" 00:17:33.080 ], 00:17:33.080 "product_name": "Raid Volume", 00:17:33.080 "block_size": 512, 00:17:33.080 "num_blocks": 190464, 00:17:33.080 "uuid": "bbf0173f-aaeb-4c05-ba99-91e2a2d03a58", 00:17:33.080 "assigned_rate_limits": { 00:17:33.080 "rw_ios_per_sec": 0, 00:17:33.080 "rw_mbytes_per_sec": 0, 00:17:33.080 "r_mbytes_per_sec": 0, 00:17:33.080 "w_mbytes_per_sec": 0 00:17:33.080 }, 00:17:33.080 "claimed": false, 00:17:33.080 "zoned": false, 00:17:33.080 "supported_io_types": { 00:17:33.080 "read": true, 00:17:33.080 "write": true, 00:17:33.080 "unmap": true, 00:17:33.080 "flush": true, 00:17:33.080 "reset": true, 00:17:33.080 "nvme_admin": false, 00:17:33.080 "nvme_io": false, 00:17:33.080 "nvme_io_md": false, 00:17:33.080 "write_zeroes": true, 00:17:33.080 "zcopy": false, 00:17:33.080 "get_zone_info": false, 00:17:33.080 "zone_management": false, 00:17:33.080 "zone_append": false, 00:17:33.080 "compare": false, 00:17:33.080 "compare_and_write": false, 00:17:33.080 "abort": false, 00:17:33.080 "seek_hole": false, 00:17:33.080 "seek_data": false, 00:17:33.080 "copy": false, 00:17:33.080 "nvme_iov_md": false 00:17:33.081 }, 00:17:33.081 "memory_domains": [ 00:17:33.081 { 00:17:33.081 "dma_device_id": "system", 00:17:33.081 "dma_device_type": 1 00:17:33.081 }, 00:17:33.081 { 00:17:33.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.081 "dma_device_type": 2 00:17:33.081 }, 00:17:33.081 { 00:17:33.081 "dma_device_id": "system", 00:17:33.081 "dma_device_type": 1 00:17:33.081 }, 00:17:33.081 { 00:17:33.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.081 "dma_device_type": 2 00:17:33.081 }, 00:17:33.081 { 
00:17:33.081 "dma_device_id": "system", 00:17:33.081 "dma_device_type": 1 00:17:33.081 }, 00:17:33.081 { 00:17:33.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.081 "dma_device_type": 2 00:17:33.081 } 00:17:33.081 ], 00:17:33.081 "driver_specific": { 00:17:33.081 "raid": { 00:17:33.081 "uuid": "bbf0173f-aaeb-4c05-ba99-91e2a2d03a58", 00:17:33.081 "strip_size_kb": 64, 00:17:33.081 "state": "online", 00:17:33.081 "raid_level": "raid0", 00:17:33.081 "superblock": true, 00:17:33.081 "num_base_bdevs": 3, 00:17:33.081 "num_base_bdevs_discovered": 3, 00:17:33.081 "num_base_bdevs_operational": 3, 00:17:33.081 "base_bdevs_list": [ 00:17:33.081 { 00:17:33.081 "name": "NewBaseBdev", 00:17:33.081 "uuid": "b8ade0f4-350d-469d-a674-3dce7b63e35c", 00:17:33.081 "is_configured": true, 00:17:33.081 "data_offset": 2048, 00:17:33.081 "data_size": 63488 00:17:33.081 }, 00:17:33.081 { 00:17:33.081 "name": "BaseBdev2", 00:17:33.081 "uuid": "f473d4be-1844-438a-bd4c-e3357326c7c4", 00:17:33.081 "is_configured": true, 00:17:33.081 "data_offset": 2048, 00:17:33.081 "data_size": 63488 00:17:33.081 }, 00:17:33.081 { 00:17:33.081 "name": "BaseBdev3", 00:17:33.081 "uuid": "779a0608-2393-4be1-8b8a-488924549a6e", 00:17:33.081 "is_configured": true, 00:17:33.081 "data_offset": 2048, 00:17:33.081 "data_size": 63488 00:17:33.081 } 00:17:33.081 ] 00:17:33.081 } 00:17:33.081 } 00:17:33.081 }' 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:33.081 BaseBdev2 00:17:33.081 BaseBdev3' 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 
00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.081 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.339 [2024-11-20 05:28:04.939698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:33.339 [2024-11-20 05:28:04.939725] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.339 [2024-11-20 05:28:04.939823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.339 [2024-11-20 05:28:04.939886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.339 [2024-11-20 05:28:04.939897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63054 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 63054 ']' 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 63054 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63054 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:33.339 killing process with pid 63054 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63054' 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 63054 00:17:33.339 05:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 63054 00:17:33.339 [2024-11-20 05:28:04.969788] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:33.339 [2024-11-20 05:28:05.130226] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:34.273 05:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:34.273 00:17:34.273 real 0m7.817s 00:17:34.273 user 0m12.565s 00:17:34.273 sys 0m1.299s 00:17:34.273 05:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:34.273 ************************************ 00:17:34.273 END TEST raid_state_function_test_sb 
00:17:34.273 ************************************ 00:17:34.273 05:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.273 05:28:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:17:34.273 05:28:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:34.273 05:28:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:34.273 05:28:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:34.273 ************************************ 00:17:34.273 START TEST raid_superblock_test 00:17:34.273 ************************************ 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:34.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63647 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63647 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 63647 ']' 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:34.273 05:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.273 [2024-11-20 05:28:05.860752] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:17:34.273 [2024-11-20 05:28:05.861099] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63647 ] 00:17:34.273 [2024-11-20 05:28:06.014664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.532 [2024-11-20 05:28:06.118349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.532 [2024-11-20 05:28:06.241955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.532 [2024-11-20 05:28:06.242980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:17:35.099 
05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.099 malloc1 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.099 [2024-11-20 05:28:06.730785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:35.099 [2024-11-20 05:28:06.731010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.099 [2024-11-20 05:28:06.731037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:35.099 [2024-11-20 05:28:06.731046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.099 [2024-11-20 05:28:06.732996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.099 [2024-11-20 05:28:06.733024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:35.099 pt1 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.099 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.100 malloc2 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.100 [2024-11-20 05:28:06.768767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:35.100 [2024-11-20 05:28:06.768838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.100 [2024-11-20 05:28:06.768862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:35.100 [2024-11-20 05:28:06.768870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.100 [2024-11-20 05:28:06.770833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.100 [2024-11-20 05:28:06.770883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:35.100 
pt2 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.100 malloc3 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.100 [2024-11-20 05:28:06.826575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:35.100 [2024-11-20 05:28:06.826851] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.100 [2024-11-20 05:28:06.826884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:35.100 [2024-11-20 05:28:06.826892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.100 [2024-11-20 05:28:06.829002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.100 [2024-11-20 05:28:06.829039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:35.100 pt3 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.100 [2024-11-20 05:28:06.834630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:35.100 [2024-11-20 05:28:06.836394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.100 [2024-11-20 05:28:06.836455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:35.100 [2024-11-20 05:28:06.836610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:35.100 [2024-11-20 05:28:06.836621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:35.100 [2024-11-20 05:28:06.836900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:17:35.100 [2024-11-20 05:28:06.837043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:35.100 [2024-11-20 05:28:06.837050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:35.100 [2024-11-20 05:28:06.837198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.100 05:28:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.100 "name": "raid_bdev1", 00:17:35.100 "uuid": "23da9544-6db3-4bbf-8384-93db4db5f1b9", 00:17:35.100 "strip_size_kb": 64, 00:17:35.100 "state": "online", 00:17:35.100 "raid_level": "raid0", 00:17:35.100 "superblock": true, 00:17:35.100 "num_base_bdevs": 3, 00:17:35.100 "num_base_bdevs_discovered": 3, 00:17:35.100 "num_base_bdevs_operational": 3, 00:17:35.100 "base_bdevs_list": [ 00:17:35.100 { 00:17:35.100 "name": "pt1", 00:17:35.100 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:35.100 "is_configured": true, 00:17:35.100 "data_offset": 2048, 00:17:35.100 "data_size": 63488 00:17:35.100 }, 00:17:35.100 { 00:17:35.100 "name": "pt2", 00:17:35.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.100 "is_configured": true, 00:17:35.100 "data_offset": 2048, 00:17:35.100 "data_size": 63488 00:17:35.100 }, 00:17:35.100 { 00:17:35.100 "name": "pt3", 00:17:35.100 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:35.100 "is_configured": true, 00:17:35.100 "data_offset": 2048, 00:17:35.100 "data_size": 63488 00:17:35.100 } 00:17:35.100 ] 00:17:35.100 }' 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.100 05:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.358 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:35.358 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:35.358 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:35.358 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:17:35.358 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:35.358 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:35.358 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:35.358 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:35.358 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.358 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.358 [2024-11-20 05:28:07.167053] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.358 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:35.617 "name": "raid_bdev1", 00:17:35.617 "aliases": [ 00:17:35.617 "23da9544-6db3-4bbf-8384-93db4db5f1b9" 00:17:35.617 ], 00:17:35.617 "product_name": "Raid Volume", 00:17:35.617 "block_size": 512, 00:17:35.617 "num_blocks": 190464, 00:17:35.617 "uuid": "23da9544-6db3-4bbf-8384-93db4db5f1b9", 00:17:35.617 "assigned_rate_limits": { 00:17:35.617 "rw_ios_per_sec": 0, 00:17:35.617 "rw_mbytes_per_sec": 0, 00:17:35.617 "r_mbytes_per_sec": 0, 00:17:35.617 "w_mbytes_per_sec": 0 00:17:35.617 }, 00:17:35.617 "claimed": false, 00:17:35.617 "zoned": false, 00:17:35.617 "supported_io_types": { 00:17:35.617 "read": true, 00:17:35.617 "write": true, 00:17:35.617 "unmap": true, 00:17:35.617 "flush": true, 00:17:35.617 "reset": true, 00:17:35.617 "nvme_admin": false, 00:17:35.617 "nvme_io": false, 00:17:35.617 "nvme_io_md": false, 00:17:35.617 "write_zeroes": true, 00:17:35.617 "zcopy": false, 00:17:35.617 "get_zone_info": false, 00:17:35.617 "zone_management": false, 00:17:35.617 "zone_append": false, 00:17:35.617 "compare": 
false, 00:17:35.617 "compare_and_write": false, 00:17:35.617 "abort": false, 00:17:35.617 "seek_hole": false, 00:17:35.617 "seek_data": false, 00:17:35.617 "copy": false, 00:17:35.617 "nvme_iov_md": false 00:17:35.617 }, 00:17:35.617 "memory_domains": [ 00:17:35.617 { 00:17:35.617 "dma_device_id": "system", 00:17:35.617 "dma_device_type": 1 00:17:35.617 }, 00:17:35.617 { 00:17:35.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.617 "dma_device_type": 2 00:17:35.617 }, 00:17:35.617 { 00:17:35.617 "dma_device_id": "system", 00:17:35.617 "dma_device_type": 1 00:17:35.617 }, 00:17:35.617 { 00:17:35.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.617 "dma_device_type": 2 00:17:35.617 }, 00:17:35.617 { 00:17:35.617 "dma_device_id": "system", 00:17:35.617 "dma_device_type": 1 00:17:35.617 }, 00:17:35.617 { 00:17:35.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.617 "dma_device_type": 2 00:17:35.617 } 00:17:35.617 ], 00:17:35.617 "driver_specific": { 00:17:35.617 "raid": { 00:17:35.617 "uuid": "23da9544-6db3-4bbf-8384-93db4db5f1b9", 00:17:35.617 "strip_size_kb": 64, 00:17:35.617 "state": "online", 00:17:35.617 "raid_level": "raid0", 00:17:35.617 "superblock": true, 00:17:35.617 "num_base_bdevs": 3, 00:17:35.617 "num_base_bdevs_discovered": 3, 00:17:35.617 "num_base_bdevs_operational": 3, 00:17:35.617 "base_bdevs_list": [ 00:17:35.617 { 00:17:35.617 "name": "pt1", 00:17:35.617 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:35.617 "is_configured": true, 00:17:35.617 "data_offset": 2048, 00:17:35.617 "data_size": 63488 00:17:35.617 }, 00:17:35.617 { 00:17:35.617 "name": "pt2", 00:17:35.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.617 "is_configured": true, 00:17:35.617 "data_offset": 2048, 00:17:35.617 "data_size": 63488 00:17:35.617 }, 00:17:35.617 { 00:17:35.617 "name": "pt3", 00:17:35.617 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:35.617 "is_configured": true, 00:17:35.617 "data_offset": 2048, 00:17:35.617 "data_size": 
63488 00:17:35.617 } 00:17:35.617 ] 00:17:35.617 } 00:17:35.617 } 00:17:35.617 }' 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:35.617 pt2 00:17:35.617 pt3' 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.617 
05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.617 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:35.618 [2024-11-20 05:28:07.355039] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=23da9544-6db3-4bbf-8384-93db4db5f1b9 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 23da9544-6db3-4bbf-8384-93db4db5f1b9 ']' 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.618 [2024-11-20 05:28:07.386790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.618 [2024-11-20 05:28:07.386826] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:35.618 [2024-11-20 05:28:07.386908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:35.618 [2024-11-20 05:28:07.386976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:35.618 [2024-11-20 05:28:07.386986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.618 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.876 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:35.876 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.876 05:28:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:35.876 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:35.876 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.876 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:35.876 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:35.876 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:35.876 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:35.876 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:35.876 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.876 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:35.876 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.876 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:35.876 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.876 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.876 [2024-11-20 05:28:07.494884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:35.876 [2024-11-20 05:28:07.496687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:35.876 [2024-11-20 05:28:07.496749] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:35.876 [2024-11-20 05:28:07.496801] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:35.876 [2024-11-20 05:28:07.496858] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:35.876 [2024-11-20 05:28:07.496875] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:35.876 [2024-11-20 05:28:07.496889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.876 [2024-11-20 05:28:07.496904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:35.876 request: 00:17:35.876 { 00:17:35.876 "name": "raid_bdev1", 00:17:35.876 "raid_level": "raid0", 00:17:35.876 "base_bdevs": [ 00:17:35.877 "malloc1", 00:17:35.877 "malloc2", 00:17:35.877 "malloc3" 00:17:35.877 ], 00:17:35.877 "strip_size_kb": 64, 00:17:35.877 "superblock": false, 00:17:35.877 "method": "bdev_raid_create", 00:17:35.877 "req_id": 1 00:17:35.877 } 00:17:35.877 Got JSON-RPC error response 00:17:35.877 response: 00:17:35.877 { 00:17:35.877 "code": -17, 00:17:35.877 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:35.877 } 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.877 [2024-11-20 05:28:07.542838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:35.877 [2024-11-20 05:28:07.543095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.877 [2024-11-20 05:28:07.543165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:35.877 [2024-11-20 05:28:07.543544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.877 [2024-11-20 05:28:07.545724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.877 [2024-11-20 05:28:07.545829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:35.877 [2024-11-20 05:28:07.545976] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:35.877 [2024-11-20 05:28:07.546072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:17:35.877 pt1 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.877 "name": "raid_bdev1", 00:17:35.877 "uuid": "23da9544-6db3-4bbf-8384-93db4db5f1b9", 00:17:35.877 
"strip_size_kb": 64, 00:17:35.877 "state": "configuring", 00:17:35.877 "raid_level": "raid0", 00:17:35.877 "superblock": true, 00:17:35.877 "num_base_bdevs": 3, 00:17:35.877 "num_base_bdevs_discovered": 1, 00:17:35.877 "num_base_bdevs_operational": 3, 00:17:35.877 "base_bdevs_list": [ 00:17:35.877 { 00:17:35.877 "name": "pt1", 00:17:35.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:35.877 "is_configured": true, 00:17:35.877 "data_offset": 2048, 00:17:35.877 "data_size": 63488 00:17:35.877 }, 00:17:35.877 { 00:17:35.877 "name": null, 00:17:35.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.877 "is_configured": false, 00:17:35.877 "data_offset": 2048, 00:17:35.877 "data_size": 63488 00:17:35.877 }, 00:17:35.877 { 00:17:35.877 "name": null, 00:17:35.877 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:35.877 "is_configured": false, 00:17:35.877 "data_offset": 2048, 00:17:35.877 "data_size": 63488 00:17:35.877 } 00:17:35.877 ] 00:17:35.877 }' 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.877 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.135 [2024-11-20 05:28:07.878908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:36.135 [2024-11-20 05:28:07.878998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.135 [2024-11-20 05:28:07.879020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:17:36.135 [2024-11-20 05:28:07.879028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.135 [2024-11-20 05:28:07.879474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.135 [2024-11-20 05:28:07.879487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:36.135 [2024-11-20 05:28:07.879570] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:36.135 [2024-11-20 05:28:07.879590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:36.135 pt2 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.135 [2024-11-20 05:28:07.886930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:36.135 05:28:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.135 "name": "raid_bdev1", 00:17:36.135 "uuid": "23da9544-6db3-4bbf-8384-93db4db5f1b9", 00:17:36.135 "strip_size_kb": 64, 00:17:36.135 "state": "configuring", 00:17:36.135 "raid_level": "raid0", 00:17:36.135 "superblock": true, 00:17:36.135 "num_base_bdevs": 3, 00:17:36.135 "num_base_bdevs_discovered": 1, 00:17:36.135 "num_base_bdevs_operational": 3, 00:17:36.135 "base_bdevs_list": [ 00:17:36.135 { 00:17:36.135 "name": "pt1", 00:17:36.135 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:36.135 "is_configured": true, 00:17:36.135 "data_offset": 2048, 00:17:36.135 "data_size": 63488 00:17:36.135 }, 00:17:36.135 { 00:17:36.135 "name": null, 00:17:36.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.135 "is_configured": false, 00:17:36.135 "data_offset": 0, 00:17:36.135 "data_size": 63488 00:17:36.135 }, 00:17:36.135 { 00:17:36.135 "name": null, 00:17:36.135 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:36.135 
"is_configured": false, 00:17:36.135 "data_offset": 2048, 00:17:36.135 "data_size": 63488 00:17:36.135 } 00:17:36.135 ] 00:17:36.135 }' 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.135 05:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.701 [2024-11-20 05:28:08.238936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:36.701 [2024-11-20 05:28:08.239021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.701 [2024-11-20 05:28:08.239040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:36.701 [2024-11-20 05:28:08.239051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.701 [2024-11-20 05:28:08.239510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.701 [2024-11-20 05:28:08.239526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:36.701 [2024-11-20 05:28:08.239606] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:36.701 [2024-11-20 05:28:08.239627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:36.701 pt2 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.701 [2024-11-20 05:28:08.246924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:36.701 [2024-11-20 05:28:08.246985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.701 [2024-11-20 05:28:08.247000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:36.701 [2024-11-20 05:28:08.247010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.701 [2024-11-20 05:28:08.247412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.701 [2024-11-20 05:28:08.247440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:36.701 [2024-11-20 05:28:08.247509] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:36.701 [2024-11-20 05:28:08.247530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:36.701 [2024-11-20 05:28:08.247644] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:36.701 [2024-11-20 05:28:08.247653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:36.701 [2024-11-20 05:28:08.247909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:36.701 [2024-11-20 05:28:08.248031] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:36.701 [2024-11-20 05:28:08.248037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:36.701 [2024-11-20 05:28:08.248153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.701 pt3 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.701 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.701 "name": "raid_bdev1", 00:17:36.701 "uuid": "23da9544-6db3-4bbf-8384-93db4db5f1b9", 00:17:36.701 "strip_size_kb": 64, 00:17:36.701 "state": "online", 00:17:36.701 "raid_level": "raid0", 00:17:36.701 "superblock": true, 00:17:36.701 "num_base_bdevs": 3, 00:17:36.701 "num_base_bdevs_discovered": 3, 00:17:36.701 "num_base_bdevs_operational": 3, 00:17:36.701 "base_bdevs_list": [ 00:17:36.701 { 00:17:36.701 "name": "pt1", 00:17:36.701 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:36.701 "is_configured": true, 00:17:36.701 "data_offset": 2048, 00:17:36.702 "data_size": 63488 00:17:36.702 }, 00:17:36.702 { 00:17:36.702 "name": "pt2", 00:17:36.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.702 "is_configured": true, 00:17:36.702 "data_offset": 2048, 00:17:36.702 "data_size": 63488 00:17:36.702 }, 00:17:36.702 { 00:17:36.702 "name": "pt3", 00:17:36.702 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:36.702 "is_configured": true, 00:17:36.702 "data_offset": 2048, 00:17:36.702 "data_size": 63488 00:17:36.702 } 00:17:36.702 ] 00:17:36.702 }' 00:17:36.702 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.702 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.959 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:36.959 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:36.959 05:28:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.960 [2024-11-20 05:28:08.555285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:36.960 "name": "raid_bdev1", 00:17:36.960 "aliases": [ 00:17:36.960 "23da9544-6db3-4bbf-8384-93db4db5f1b9" 00:17:36.960 ], 00:17:36.960 "product_name": "Raid Volume", 00:17:36.960 "block_size": 512, 00:17:36.960 "num_blocks": 190464, 00:17:36.960 "uuid": "23da9544-6db3-4bbf-8384-93db4db5f1b9", 00:17:36.960 "assigned_rate_limits": { 00:17:36.960 "rw_ios_per_sec": 0, 00:17:36.960 "rw_mbytes_per_sec": 0, 00:17:36.960 "r_mbytes_per_sec": 0, 00:17:36.960 "w_mbytes_per_sec": 0 00:17:36.960 }, 00:17:36.960 "claimed": false, 00:17:36.960 "zoned": false, 00:17:36.960 "supported_io_types": { 00:17:36.960 "read": true, 00:17:36.960 "write": true, 00:17:36.960 "unmap": true, 00:17:36.960 "flush": true, 00:17:36.960 "reset": true, 00:17:36.960 "nvme_admin": false, 00:17:36.960 "nvme_io": false, 00:17:36.960 "nvme_io_md": false, 00:17:36.960 
"write_zeroes": true, 00:17:36.960 "zcopy": false, 00:17:36.960 "get_zone_info": false, 00:17:36.960 "zone_management": false, 00:17:36.960 "zone_append": false, 00:17:36.960 "compare": false, 00:17:36.960 "compare_and_write": false, 00:17:36.960 "abort": false, 00:17:36.960 "seek_hole": false, 00:17:36.960 "seek_data": false, 00:17:36.960 "copy": false, 00:17:36.960 "nvme_iov_md": false 00:17:36.960 }, 00:17:36.960 "memory_domains": [ 00:17:36.960 { 00:17:36.960 "dma_device_id": "system", 00:17:36.960 "dma_device_type": 1 00:17:36.960 }, 00:17:36.960 { 00:17:36.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.960 "dma_device_type": 2 00:17:36.960 }, 00:17:36.960 { 00:17:36.960 "dma_device_id": "system", 00:17:36.960 "dma_device_type": 1 00:17:36.960 }, 00:17:36.960 { 00:17:36.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.960 "dma_device_type": 2 00:17:36.960 }, 00:17:36.960 { 00:17:36.960 "dma_device_id": "system", 00:17:36.960 "dma_device_type": 1 00:17:36.960 }, 00:17:36.960 { 00:17:36.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.960 "dma_device_type": 2 00:17:36.960 } 00:17:36.960 ], 00:17:36.960 "driver_specific": { 00:17:36.960 "raid": { 00:17:36.960 "uuid": "23da9544-6db3-4bbf-8384-93db4db5f1b9", 00:17:36.960 "strip_size_kb": 64, 00:17:36.960 "state": "online", 00:17:36.960 "raid_level": "raid0", 00:17:36.960 "superblock": true, 00:17:36.960 "num_base_bdevs": 3, 00:17:36.960 "num_base_bdevs_discovered": 3, 00:17:36.960 "num_base_bdevs_operational": 3, 00:17:36.960 "base_bdevs_list": [ 00:17:36.960 { 00:17:36.960 "name": "pt1", 00:17:36.960 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:36.960 "is_configured": true, 00:17:36.960 "data_offset": 2048, 00:17:36.960 "data_size": 63488 00:17:36.960 }, 00:17:36.960 { 00:17:36.960 "name": "pt2", 00:17:36.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.960 "is_configured": true, 00:17:36.960 "data_offset": 2048, 00:17:36.960 "data_size": 63488 00:17:36.960 }, 00:17:36.960 
{ 00:17:36.960 "name": "pt3", 00:17:36.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:36.960 "is_configured": true, 00:17:36.960 "data_offset": 2048, 00:17:36.960 "data_size": 63488 00:17:36.960 } 00:17:36.960 ] 00:17:36.960 } 00:17:36.960 } 00:17:36.960 }' 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:36.960 pt2 00:17:36.960 pt3' 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.960 [2024-11-20 
05:28:08.743277] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 23da9544-6db3-4bbf-8384-93db4db5f1b9 '!=' 23da9544-6db3-4bbf-8384-93db4db5f1b9 ']' 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63647 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 63647 ']' 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 63647 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:36.960 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63647 00:17:37.218 killing process with pid 63647 00:17:37.218 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:37.218 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:37.218 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63647' 00:17:37.218 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 63647 00:17:37.218 [2024-11-20 05:28:08.801685] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:37.218 [2024-11-20 05:28:08.801791] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.218 05:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 63647 00:17:37.218 [2024-11-20 05:28:08.801852] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.218 [2024-11-20 05:28:08.801865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:37.218 [2024-11-20 05:28:08.959312] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:37.792 05:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:37.792 00:17:37.792 real 0m3.769s 00:17:37.792 user 0m5.434s 00:17:37.792 sys 0m0.636s 00:17:37.792 05:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:37.792 05:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.792 ************************************ 00:17:37.792 END TEST raid_superblock_test 00:17:37.792 ************************************ 00:17:37.792 05:28:09 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:17:37.792 05:28:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:37.792 05:28:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:37.792 05:28:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:37.792 ************************************ 00:17:37.792 START TEST raid_read_error_test 00:17:37.792 ************************************ 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:17:37.792 05:28:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:37.792 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.p8ptZH45NC 00:17:38.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.051 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63889 00:17:38.051 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63889 00:17:38.051 05:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63889 ']' 00:17:38.051 05:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.051 05:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:38.051 05:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.051 05:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:38.051 05:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:38.051 05:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.051 [2024-11-20 05:28:09.687840] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:17:38.051 [2024-11-20 05:28:09.688332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63889 ] 00:17:38.051 [2024-11-20 05:28:09.843996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.370 [2024-11-20 05:28:09.948686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.370 [2024-11-20 05:28:10.074872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.370 [2024-11-20 05:28:10.074924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.989 BaseBdev1_malloc 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.989 true 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.989 [2024-11-20 05:28:10.586042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:38.989 [2024-11-20 05:28:10.586118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.989 [2024-11-20 05:28:10.586139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:38.989 [2024-11-20 05:28:10.586149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.989 [2024-11-20 05:28:10.588143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.989 [2024-11-20 05:28:10.588190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:38.989 BaseBdev1 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.989 BaseBdev2_malloc 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.989 true 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.989 [2024-11-20 05:28:10.628141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:38.989 [2024-11-20 05:28:10.628220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.989 [2024-11-20 05:28:10.628237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:38.989 [2024-11-20 05:28:10.628246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.989 [2024-11-20 05:28:10.630287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.989 [2024-11-20 05:28:10.630337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:38.989 BaseBdev2 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.989 BaseBdev3_malloc 00:17:38.989 05:28:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.989 true 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.989 [2024-11-20 05:28:10.690454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:38.989 [2024-11-20 05:28:10.690897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.989 [2024-11-20 05:28:10.690926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:38.989 [2024-11-20 05:28:10.690936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.989 [2024-11-20 05:28:10.692981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.989 [2024-11-20 05:28:10.693026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:38.989 BaseBdev3 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.989 [2024-11-20 05:28:10.698547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.989 [2024-11-20 05:28:10.700290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.989 [2024-11-20 05:28:10.700378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:38.989 [2024-11-20 05:28:10.700567] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:38.989 [2024-11-20 05:28:10.700582] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:38.989 [2024-11-20 05:28:10.700841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:17:38.989 [2024-11-20 05:28:10.700982] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:38.989 [2024-11-20 05:28:10.700993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:38.989 [2024-11-20 05:28:10.701138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.989 05:28:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.989 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.990 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.990 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.990 "name": "raid_bdev1", 00:17:38.990 "uuid": "32f348da-e91e-44df-ad1b-c12fd1b59bdb", 00:17:38.990 "strip_size_kb": 64, 00:17:38.990 "state": "online", 00:17:38.990 "raid_level": "raid0", 00:17:38.990 "superblock": true, 00:17:38.990 "num_base_bdevs": 3, 00:17:38.990 "num_base_bdevs_discovered": 3, 00:17:38.990 "num_base_bdevs_operational": 3, 00:17:38.990 "base_bdevs_list": [ 00:17:38.990 { 00:17:38.990 "name": "BaseBdev1", 00:17:38.990 "uuid": "1255c193-dd31-528d-9dc5-bfe69cd394b4", 00:17:38.990 "is_configured": true, 00:17:38.990 "data_offset": 2048, 00:17:38.990 "data_size": 63488 00:17:38.990 }, 00:17:38.990 { 00:17:38.990 "name": "BaseBdev2", 00:17:38.990 "uuid": "d1b2e1ff-5049-54b0-9d46-47f1a76847c5", 00:17:38.990 "is_configured": true, 00:17:38.990 "data_offset": 2048, 00:17:38.990 "data_size": 63488 
00:17:38.990 }, 00:17:38.990 { 00:17:38.990 "name": "BaseBdev3", 00:17:38.990 "uuid": "5a95e334-be0a-5950-9074-a1ae49e39657", 00:17:38.990 "is_configured": true, 00:17:38.990 "data_offset": 2048, 00:17:38.990 "data_size": 63488 00:17:38.990 } 00:17:38.990 ] 00:17:38.990 }' 00:17:38.990 05:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.990 05:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.248 05:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:39.248 05:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:39.507 [2024-11-20 05:28:11.103469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.463 "name": "raid_bdev1", 00:17:40.463 "uuid": "32f348da-e91e-44df-ad1b-c12fd1b59bdb", 00:17:40.463 "strip_size_kb": 64, 00:17:40.463 "state": "online", 00:17:40.463 "raid_level": "raid0", 00:17:40.463 "superblock": true, 00:17:40.463 "num_base_bdevs": 3, 00:17:40.463 "num_base_bdevs_discovered": 3, 00:17:40.463 "num_base_bdevs_operational": 3, 00:17:40.463 "base_bdevs_list": [ 00:17:40.463 { 00:17:40.463 "name": "BaseBdev1", 00:17:40.463 "uuid": "1255c193-dd31-528d-9dc5-bfe69cd394b4", 00:17:40.463 "is_configured": true, 00:17:40.463 "data_offset": 2048, 00:17:40.463 "data_size": 63488 
00:17:40.463 }, 00:17:40.463 { 00:17:40.463 "name": "BaseBdev2", 00:17:40.463 "uuid": "d1b2e1ff-5049-54b0-9d46-47f1a76847c5", 00:17:40.463 "is_configured": true, 00:17:40.463 "data_offset": 2048, 00:17:40.463 "data_size": 63488 00:17:40.463 }, 00:17:40.463 { 00:17:40.463 "name": "BaseBdev3", 00:17:40.463 "uuid": "5a95e334-be0a-5950-9074-a1ae49e39657", 00:17:40.463 "is_configured": true, 00:17:40.463 "data_offset": 2048, 00:17:40.463 "data_size": 63488 00:17:40.463 } 00:17:40.463 ] 00:17:40.463 }' 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.463 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.721 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:40.721 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.721 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.721 [2024-11-20 05:28:12.340741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.721 [2024-11-20 05:28:12.340781] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:40.721 [2024-11-20 05:28:12.343209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.721 [2024-11-20 05:28:12.343257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.721 [2024-11-20 05:28:12.343292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.721 [2024-11-20 05:28:12.343300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:40.721 { 00:17:40.721 "results": [ 00:17:40.721 { 00:17:40.721 "job": "raid_bdev1", 00:17:40.721 "core_mask": "0x1", 00:17:40.721 "workload": "randrw", 00:17:40.721 "percentage": 50, 
00:17:40.721 "status": "finished", 00:17:40.721 "queue_depth": 1, 00:17:40.721 "io_size": 131072, 00:17:40.721 "runtime": 1.235558, 00:17:40.721 "iops": 16709.049676340568, 00:17:40.721 "mibps": 2088.631209542571, 00:17:40.721 "io_failed": 1, 00:17:40.721 "io_timeout": 0, 00:17:40.721 "avg_latency_us": 82.89051334212625, 00:17:40.721 "min_latency_us": 26.19076923076923, 00:17:40.721 "max_latency_us": 1329.6246153846155 00:17:40.721 } 00:17:40.721 ], 00:17:40.721 "core_count": 1 00:17:40.721 } 00:17:40.721 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.721 05:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63889 00:17:40.721 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 63889 ']' 00:17:40.721 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63889 00:17:40.721 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:17:40.721 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:40.721 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63889 00:17:40.721 killing process with pid 63889 00:17:40.721 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:40.721 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:40.721 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63889' 00:17:40.721 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63889 00:17:40.721 05:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63889 00:17:40.721 [2024-11-20 05:28:12.374241] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:40.721 [2024-11-20 
05:28:12.496432] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:41.656 05:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:41.656 05:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.p8ptZH45NC 00:17:41.656 05:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:41.656 05:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:17:41.656 05:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:17:41.656 ************************************ 00:17:41.656 END TEST raid_read_error_test 00:17:41.656 ************************************ 00:17:41.656 05:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:41.656 05:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:41.656 05:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:17:41.656 00:17:41.656 real 0m3.526s 00:17:41.656 user 0m4.160s 00:17:41.656 sys 0m0.450s 00:17:41.656 05:28:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:41.656 05:28:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.656 05:28:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:17:41.656 05:28:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:41.656 05:28:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:41.656 05:28:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:41.656 ************************************ 00:17:41.656 START TEST raid_write_error_test 00:17:41.656 ************************************ 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:17:41.656 05:28:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:41.656 05:28:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:41.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GjWE7az49X 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64018 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64018 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 64018 ']' 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.656 05:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:41.656 [2024-11-20 05:28:13.262172] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:17:41.656 [2024-11-20 05:28:13.263037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64018 ] 00:17:41.656 [2024-11-20 05:28:13.426417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.915 [2024-11-20 05:28:13.544785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.915 [2024-11-20 05:28:13.691943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.915 [2024-11-20 05:28:13.692223] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.485 BaseBdev1_malloc 00:17:42.485 05:28:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.485 true 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.485 [2024-11-20 05:28:14.153208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:42.485 [2024-11-20 05:28:14.153277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.485 [2024-11-20 05:28:14.153297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:42.485 [2024-11-20 05:28:14.153310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.485 [2024-11-20 05:28:14.155630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.485 [2024-11-20 05:28:14.155669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:42.485 BaseBdev1 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.485 BaseBdev2_malloc 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.485 true 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.485 [2024-11-20 05:28:14.199614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:42.485 [2024-11-20 05:28:14.199682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.485 [2024-11-20 05:28:14.199700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:42.485 [2024-11-20 05:28:14.199712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.485 [2024-11-20 05:28:14.201954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.485 [2024-11-20 05:28:14.201990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:42.485 BaseBdev2 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.485 BaseBdev3_malloc 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.485 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.485 true 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.486 [2024-11-20 05:28:14.261952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:42.486 [2024-11-20 05:28:14.262020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.486 [2024-11-20 05:28:14.262041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:42.486 [2024-11-20 05:28:14.262053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.486 [2024-11-20 05:28:14.264393] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.486 [2024-11-20 05:28:14.264433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:42.486 BaseBdev3 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.486 [2024-11-20 05:28:14.270037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.486 [2024-11-20 05:28:14.272098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.486 [2024-11-20 05:28:14.272189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:42.486 [2024-11-20 05:28:14.272439] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:42.486 [2024-11-20 05:28:14.272454] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:42.486 [2024-11-20 05:28:14.272740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:17:42.486 [2024-11-20 05:28:14.272919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:42.486 [2024-11-20 05:28:14.272933] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:42.486 [2024-11-20 05:28:14.273094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.486 
05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.486 "name": "raid_bdev1", 00:17:42.486 "uuid": "b5dcce24-ae41-432f-900e-bc807b9c14b3", 00:17:42.486 "strip_size_kb": 64, 00:17:42.486 "state": "online", 00:17:42.486 "raid_level": "raid0", 00:17:42.486 "superblock": true, 
00:17:42.486 "num_base_bdevs": 3, 00:17:42.486 "num_base_bdevs_discovered": 3, 00:17:42.486 "num_base_bdevs_operational": 3, 00:17:42.486 "base_bdevs_list": [ 00:17:42.486 { 00:17:42.486 "name": "BaseBdev1", 00:17:42.486 "uuid": "be5d04df-2d5b-5918-a51e-264a57dda14e", 00:17:42.486 "is_configured": true, 00:17:42.486 "data_offset": 2048, 00:17:42.486 "data_size": 63488 00:17:42.486 }, 00:17:42.486 { 00:17:42.486 "name": "BaseBdev2", 00:17:42.486 "uuid": "c4050976-e8d0-552b-892d-e757fa511d2a", 00:17:42.486 "is_configured": true, 00:17:42.486 "data_offset": 2048, 00:17:42.486 "data_size": 63488 00:17:42.486 }, 00:17:42.486 { 00:17:42.486 "name": "BaseBdev3", 00:17:42.486 "uuid": "4e37c303-58f7-51b9-ac21-d74a3087fef5", 00:17:42.486 "is_configured": true, 00:17:42.486 "data_offset": 2048, 00:17:42.486 "data_size": 63488 00:17:42.486 } 00:17:42.486 ] 00:17:42.486 }' 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.486 05:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.054 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:43.054 05:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:43.054 [2024-11-20 05:28:14.667181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local 
expected_num_base_bdevs 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:43.996 "name": "raid_bdev1", 00:17:43.996 "uuid": "b5dcce24-ae41-432f-900e-bc807b9c14b3", 00:17:43.996 "strip_size_kb": 64, 00:17:43.996 "state": "online", 00:17:43.996 "raid_level": "raid0", 00:17:43.996 "superblock": true, 00:17:43.996 "num_base_bdevs": 3, 00:17:43.996 "num_base_bdevs_discovered": 3, 00:17:43.996 "num_base_bdevs_operational": 3, 00:17:43.996 "base_bdevs_list": [ 00:17:43.996 { 00:17:43.996 "name": "BaseBdev1", 00:17:43.996 "uuid": "be5d04df-2d5b-5918-a51e-264a57dda14e", 00:17:43.996 "is_configured": true, 00:17:43.996 "data_offset": 2048, 00:17:43.996 "data_size": 63488 00:17:43.996 }, 00:17:43.996 { 00:17:43.996 "name": "BaseBdev2", 00:17:43.996 "uuid": "c4050976-e8d0-552b-892d-e757fa511d2a", 00:17:43.996 "is_configured": true, 00:17:43.996 "data_offset": 2048, 00:17:43.996 "data_size": 63488 00:17:43.996 }, 00:17:43.996 { 00:17:43.996 "name": "BaseBdev3", 00:17:43.996 "uuid": "4e37c303-58f7-51b9-ac21-d74a3087fef5", 00:17:43.996 "is_configured": true, 00:17:43.996 "data_offset": 2048, 00:17:43.996 "data_size": 63488 00:17:43.996 } 00:17:43.996 ] 00:17:43.996 }' 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.996 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.256 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:44.256 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.256 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.256 [2024-11-20 05:28:15.941429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.256 [2024-11-20 05:28:15.941636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.256 [2024-11-20 05:28:15.944743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:17:44.256 [2024-11-20 05:28:15.944901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.256 [2024-11-20 05:28:15.944952] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.256 [2024-11-20 05:28:15.944962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:44.256 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.256 { 00:17:44.256 "results": [ 00:17:44.256 { 00:17:44.256 "job": "raid_bdev1", 00:17:44.256 "core_mask": "0x1", 00:17:44.256 "workload": "randrw", 00:17:44.256 "percentage": 50, 00:17:44.256 "status": "finished", 00:17:44.256 "queue_depth": 1, 00:17:44.256 "io_size": 131072, 00:17:44.256 "runtime": 1.272394, 00:17:44.256 "iops": 14110.409197151197, 00:17:44.256 "mibps": 1763.8011496438996, 00:17:44.256 "io_failed": 1, 00:17:44.256 "io_timeout": 0, 00:17:44.256 "avg_latency_us": 97.47013413876572, 00:17:44.256 "min_latency_us": 33.28, 00:17:44.256 "max_latency_us": 1688.8123076923077 00:17:44.256 } 00:17:44.256 ], 00:17:44.256 "core_count": 1 00:17:44.256 } 00:17:44.256 05:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64018 00:17:44.256 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 64018 ']' 00:17:44.256 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 64018 00:17:44.256 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:17:44.256 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:44.256 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64018 00:17:44.256 killing process with pid 64018 00:17:44.256 05:28:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:44.256 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:44.256 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64018' 00:17:44.256 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 64018 00:17:44.256 [2024-11-20 05:28:15.973644] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:44.256 05:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 64018 00:17:44.515 [2024-11-20 05:28:16.124133] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:45.081 05:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GjWE7az49X 00:17:45.081 05:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:45.081 05:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:45.081 05:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:17:45.081 05:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:17:45.081 05:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:45.081 05:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:45.081 05:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:17:45.081 00:17:45.081 real 0m3.725s 00:17:45.081 user 0m4.383s 00:17:45.081 sys 0m0.442s 00:17:45.081 05:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:45.081 ************************************ 00:17:45.081 END TEST raid_write_error_test 00:17:45.081 ************************************ 00:17:45.081 05:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.340 
05:28:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:17:45.340 05:28:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:17:45.340 05:28:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:45.340 05:28:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:45.340 05:28:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:45.340 ************************************ 00:17:45.340 START TEST raid_state_function_test 00:17:45.340 ************************************ 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i 
<= num_base_bdevs )) 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:45.340 Process raid pid: 64156 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64156 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64156' 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64156 00:17:45.340 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 64156 ']' 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.340 05:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:45.340 [2024-11-20 05:28:17.023114] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:17:45.340 [2024-11-20 05:28:17.023252] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.600 [2024-11-20 05:28:17.187809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.600 [2024-11-20 05:28:17.306950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.861 [2024-11-20 05:28:17.456529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.861 [2024-11-20 05:28:17.456567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.123 [2024-11-20 05:28:17.868607] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:46.123 [2024-11-20 05:28:17.868680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:46.123 [2024-11-20 05:28:17.868692] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.123 [2024-11-20 05:28:17.868703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.123 [2024-11-20 05:28:17.868709] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:17:46.123 [2024-11-20 05:28:17.868719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.123 05:28:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.123 "name": "Existed_Raid", 00:17:46.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.123 "strip_size_kb": 64, 00:17:46.123 "state": "configuring", 00:17:46.123 "raid_level": "concat", 00:17:46.123 "superblock": false, 00:17:46.123 "num_base_bdevs": 3, 00:17:46.123 "num_base_bdevs_discovered": 0, 00:17:46.123 "num_base_bdevs_operational": 3, 00:17:46.123 "base_bdevs_list": [ 00:17:46.123 { 00:17:46.123 "name": "BaseBdev1", 00:17:46.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.123 "is_configured": false, 00:17:46.123 "data_offset": 0, 00:17:46.123 "data_size": 0 00:17:46.123 }, 00:17:46.123 { 00:17:46.123 "name": "BaseBdev2", 00:17:46.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.123 "is_configured": false, 00:17:46.123 "data_offset": 0, 00:17:46.123 "data_size": 0 00:17:46.123 }, 00:17:46.123 { 00:17:46.123 "name": "BaseBdev3", 00:17:46.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.123 "is_configured": false, 00:17:46.123 "data_offset": 0, 00:17:46.123 "data_size": 0 00:17:46.123 } 00:17:46.123 ] 00:17:46.123 }' 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.123 05:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.384 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:46.384 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.384 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.384 [2024-11-20 05:28:18.204617] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:46.384 [2024-11-20 05:28:18.204664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:17:46.384 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.384 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:46.384 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.384 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.384 [2024-11-20 05:28:18.212636] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:46.384 [2024-11-20 05:28:18.212688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:46.384 [2024-11-20 05:28:18.212696] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.384 [2024-11-20 05:28:18.212705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.384 [2024-11-20 05:28:18.212711] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:46.384 [2024-11-20 05:28:18.212720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:46.642 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.643 [2024-11-20 05:28:18.242481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.643 BaseBdev1 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.643 [ 00:17:46.643 { 00:17:46.643 "name": "BaseBdev1", 00:17:46.643 "aliases": [ 00:17:46.643 "fb3bdb3a-2eba-4a93-8587-75ebbec6ee3f" 00:17:46.643 ], 00:17:46.643 "product_name": "Malloc disk", 00:17:46.643 "block_size": 512, 00:17:46.643 "num_blocks": 65536, 00:17:46.643 "uuid": "fb3bdb3a-2eba-4a93-8587-75ebbec6ee3f", 00:17:46.643 "assigned_rate_limits": { 00:17:46.643 "rw_ios_per_sec": 0, 00:17:46.643 "rw_mbytes_per_sec": 0, 00:17:46.643 "r_mbytes_per_sec": 0, 00:17:46.643 "w_mbytes_per_sec": 0 00:17:46.643 }, 
00:17:46.643 "claimed": true, 00:17:46.643 "claim_type": "exclusive_write", 00:17:46.643 "zoned": false, 00:17:46.643 "supported_io_types": { 00:17:46.643 "read": true, 00:17:46.643 "write": true, 00:17:46.643 "unmap": true, 00:17:46.643 "flush": true, 00:17:46.643 "reset": true, 00:17:46.643 "nvme_admin": false, 00:17:46.643 "nvme_io": false, 00:17:46.643 "nvme_io_md": false, 00:17:46.643 "write_zeroes": true, 00:17:46.643 "zcopy": true, 00:17:46.643 "get_zone_info": false, 00:17:46.643 "zone_management": false, 00:17:46.643 "zone_append": false, 00:17:46.643 "compare": false, 00:17:46.643 "compare_and_write": false, 00:17:46.643 "abort": true, 00:17:46.643 "seek_hole": false, 00:17:46.643 "seek_data": false, 00:17:46.643 "copy": true, 00:17:46.643 "nvme_iov_md": false 00:17:46.643 }, 00:17:46.643 "memory_domains": [ 00:17:46.643 { 00:17:46.643 "dma_device_id": "system", 00:17:46.643 "dma_device_type": 1 00:17:46.643 }, 00:17:46.643 { 00:17:46.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.643 "dma_device_type": 2 00:17:46.643 } 00:17:46.643 ], 00:17:46.643 "driver_specific": {} 00:17:46.643 } 00:17:46.643 ] 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.643 05:28:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.643 "name": "Existed_Raid", 00:17:46.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.643 "strip_size_kb": 64, 00:17:46.643 "state": "configuring", 00:17:46.643 "raid_level": "concat", 00:17:46.643 "superblock": false, 00:17:46.643 "num_base_bdevs": 3, 00:17:46.643 "num_base_bdevs_discovered": 1, 00:17:46.643 "num_base_bdevs_operational": 3, 00:17:46.643 "base_bdevs_list": [ 00:17:46.643 { 00:17:46.643 "name": "BaseBdev1", 00:17:46.643 "uuid": "fb3bdb3a-2eba-4a93-8587-75ebbec6ee3f", 00:17:46.643 "is_configured": true, 00:17:46.643 "data_offset": 0, 00:17:46.643 "data_size": 65536 00:17:46.643 }, 00:17:46.643 { 00:17:46.643 "name": "BaseBdev2", 00:17:46.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.643 "is_configured": false, 00:17:46.643 
"data_offset": 0, 00:17:46.643 "data_size": 0 00:17:46.643 }, 00:17:46.643 { 00:17:46.643 "name": "BaseBdev3", 00:17:46.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.643 "is_configured": false, 00:17:46.643 "data_offset": 0, 00:17:46.643 "data_size": 0 00:17:46.643 } 00:17:46.643 ] 00:17:46.643 }' 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.643 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.902 [2024-11-20 05:28:18.586614] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:46.902 [2024-11-20 05:28:18.586674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.902 [2024-11-20 05:28:18.594663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.902 [2024-11-20 05:28:18.596418] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.902 [2024-11-20 05:28:18.596457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:17:46.902 [2024-11-20 05:28:18.596465] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:46.902 [2024-11-20 05:28:18.596472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.902 "name": "Existed_Raid", 00:17:46.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.902 "strip_size_kb": 64, 00:17:46.902 "state": "configuring", 00:17:46.902 "raid_level": "concat", 00:17:46.902 "superblock": false, 00:17:46.902 "num_base_bdevs": 3, 00:17:46.902 "num_base_bdevs_discovered": 1, 00:17:46.902 "num_base_bdevs_operational": 3, 00:17:46.902 "base_bdevs_list": [ 00:17:46.902 { 00:17:46.902 "name": "BaseBdev1", 00:17:46.902 "uuid": "fb3bdb3a-2eba-4a93-8587-75ebbec6ee3f", 00:17:46.902 "is_configured": true, 00:17:46.902 "data_offset": 0, 00:17:46.902 "data_size": 65536 00:17:46.902 }, 00:17:46.902 { 00:17:46.902 "name": "BaseBdev2", 00:17:46.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.902 "is_configured": false, 00:17:46.902 "data_offset": 0, 00:17:46.902 "data_size": 0 00:17:46.902 }, 00:17:46.902 { 00:17:46.902 "name": "BaseBdev3", 00:17:46.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.902 "is_configured": false, 00:17:46.902 "data_offset": 0, 00:17:46.902 "data_size": 0 00:17:46.902 } 00:17:46.902 ] 00:17:46.902 }' 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.902 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.160 [2024-11-20 05:28:18.979301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:47.160 BaseBdev2 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.160 05:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.418 [ 00:17:47.418 { 00:17:47.418 "name": "BaseBdev2", 00:17:47.418 "aliases": [ 00:17:47.418 "6f03c697-570c-4a45-bde1-9312456033b8" 00:17:47.418 ], 
00:17:47.418 "product_name": "Malloc disk", 00:17:47.418 "block_size": 512, 00:17:47.418 "num_blocks": 65536, 00:17:47.418 "uuid": "6f03c697-570c-4a45-bde1-9312456033b8", 00:17:47.418 "assigned_rate_limits": { 00:17:47.418 "rw_ios_per_sec": 0, 00:17:47.418 "rw_mbytes_per_sec": 0, 00:17:47.418 "r_mbytes_per_sec": 0, 00:17:47.418 "w_mbytes_per_sec": 0 00:17:47.418 }, 00:17:47.418 "claimed": true, 00:17:47.418 "claim_type": "exclusive_write", 00:17:47.418 "zoned": false, 00:17:47.418 "supported_io_types": { 00:17:47.418 "read": true, 00:17:47.418 "write": true, 00:17:47.418 "unmap": true, 00:17:47.418 "flush": true, 00:17:47.418 "reset": true, 00:17:47.418 "nvme_admin": false, 00:17:47.418 "nvme_io": false, 00:17:47.418 "nvme_io_md": false, 00:17:47.418 "write_zeroes": true, 00:17:47.418 "zcopy": true, 00:17:47.418 "get_zone_info": false, 00:17:47.418 "zone_management": false, 00:17:47.418 "zone_append": false, 00:17:47.418 "compare": false, 00:17:47.418 "compare_and_write": false, 00:17:47.418 "abort": true, 00:17:47.418 "seek_hole": false, 00:17:47.418 "seek_data": false, 00:17:47.418 "copy": true, 00:17:47.418 "nvme_iov_md": false 00:17:47.419 }, 00:17:47.419 "memory_domains": [ 00:17:47.419 { 00:17:47.419 "dma_device_id": "system", 00:17:47.419 "dma_device_type": 1 00:17:47.419 }, 00:17:47.419 { 00:17:47.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.419 "dma_device_type": 2 00:17:47.419 } 00:17:47.419 ], 00:17:47.419 "driver_specific": {} 00:17:47.419 } 00:17:47.419 ] 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.419 "name": "Existed_Raid", 00:17:47.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.419 "strip_size_kb": 64, 00:17:47.419 "state": "configuring", 00:17:47.419 "raid_level": "concat", 00:17:47.419 
"superblock": false, 00:17:47.419 "num_base_bdevs": 3, 00:17:47.419 "num_base_bdevs_discovered": 2, 00:17:47.419 "num_base_bdevs_operational": 3, 00:17:47.419 "base_bdevs_list": [ 00:17:47.419 { 00:17:47.419 "name": "BaseBdev1", 00:17:47.419 "uuid": "fb3bdb3a-2eba-4a93-8587-75ebbec6ee3f", 00:17:47.419 "is_configured": true, 00:17:47.419 "data_offset": 0, 00:17:47.419 "data_size": 65536 00:17:47.419 }, 00:17:47.419 { 00:17:47.419 "name": "BaseBdev2", 00:17:47.419 "uuid": "6f03c697-570c-4a45-bde1-9312456033b8", 00:17:47.419 "is_configured": true, 00:17:47.419 "data_offset": 0, 00:17:47.419 "data_size": 65536 00:17:47.419 }, 00:17:47.419 { 00:17:47.419 "name": "BaseBdev3", 00:17:47.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.419 "is_configured": false, 00:17:47.419 "data_offset": 0, 00:17:47.419 "data_size": 0 00:17:47.419 } 00:17:47.419 ] 00:17:47.419 }' 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.419 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.677 [2024-11-20 05:28:19.389000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:47.677 [2024-11-20 05:28:19.389243] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:47.677 [2024-11-20 05:28:19.389264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:47.677 [2024-11-20 05:28:19.389547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:47.677 [2024-11-20 05:28:19.389697] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:47.677 [2024-11-20 05:28:19.389705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:47.677 [2024-11-20 05:28:19.389939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.677 BaseBdev3 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.677 [ 00:17:47.677 { 00:17:47.677 
"name": "BaseBdev3", 00:17:47.677 "aliases": [ 00:17:47.677 "1fa13902-2795-4c99-8993-19a2b0d8c3a6" 00:17:47.677 ], 00:17:47.677 "product_name": "Malloc disk", 00:17:47.677 "block_size": 512, 00:17:47.677 "num_blocks": 65536, 00:17:47.677 "uuid": "1fa13902-2795-4c99-8993-19a2b0d8c3a6", 00:17:47.677 "assigned_rate_limits": { 00:17:47.677 "rw_ios_per_sec": 0, 00:17:47.677 "rw_mbytes_per_sec": 0, 00:17:47.677 "r_mbytes_per_sec": 0, 00:17:47.677 "w_mbytes_per_sec": 0 00:17:47.677 }, 00:17:47.677 "claimed": true, 00:17:47.677 "claim_type": "exclusive_write", 00:17:47.677 "zoned": false, 00:17:47.677 "supported_io_types": { 00:17:47.677 "read": true, 00:17:47.677 "write": true, 00:17:47.677 "unmap": true, 00:17:47.677 "flush": true, 00:17:47.677 "reset": true, 00:17:47.677 "nvme_admin": false, 00:17:47.677 "nvme_io": false, 00:17:47.677 "nvme_io_md": false, 00:17:47.677 "write_zeroes": true, 00:17:47.677 "zcopy": true, 00:17:47.677 "get_zone_info": false, 00:17:47.677 "zone_management": false, 00:17:47.677 "zone_append": false, 00:17:47.677 "compare": false, 00:17:47.677 "compare_and_write": false, 00:17:47.677 "abort": true, 00:17:47.677 "seek_hole": false, 00:17:47.677 "seek_data": false, 00:17:47.677 "copy": true, 00:17:47.677 "nvme_iov_md": false 00:17:47.677 }, 00:17:47.677 "memory_domains": [ 00:17:47.677 { 00:17:47.677 "dma_device_id": "system", 00:17:47.677 "dma_device_type": 1 00:17:47.677 }, 00:17:47.677 { 00:17:47.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.677 "dma_device_type": 2 00:17:47.677 } 00:17:47.677 ], 00:17:47.677 "driver_specific": {} 00:17:47.677 } 00:17:47.677 ] 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.677 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.677 "name": "Existed_Raid", 00:17:47.678 "uuid": "5a4b1e10-aaac-4c5e-b6d2-77226d7bfae3", 00:17:47.678 
"strip_size_kb": 64, 00:17:47.678 "state": "online", 00:17:47.678 "raid_level": "concat", 00:17:47.678 "superblock": false, 00:17:47.678 "num_base_bdevs": 3, 00:17:47.678 "num_base_bdevs_discovered": 3, 00:17:47.678 "num_base_bdevs_operational": 3, 00:17:47.678 "base_bdevs_list": [ 00:17:47.678 { 00:17:47.678 "name": "BaseBdev1", 00:17:47.678 "uuid": "fb3bdb3a-2eba-4a93-8587-75ebbec6ee3f", 00:17:47.678 "is_configured": true, 00:17:47.678 "data_offset": 0, 00:17:47.678 "data_size": 65536 00:17:47.678 }, 00:17:47.678 { 00:17:47.678 "name": "BaseBdev2", 00:17:47.678 "uuid": "6f03c697-570c-4a45-bde1-9312456033b8", 00:17:47.678 "is_configured": true, 00:17:47.678 "data_offset": 0, 00:17:47.678 "data_size": 65536 00:17:47.678 }, 00:17:47.678 { 00:17:47.678 "name": "BaseBdev3", 00:17:47.678 "uuid": "1fa13902-2795-4c99-8993-19a2b0d8c3a6", 00:17:47.678 "is_configured": true, 00:17:47.678 "data_offset": 0, 00:17:47.678 "data_size": 65536 00:17:47.678 } 00:17:47.678 ] 00:17:47.678 }' 00:17:47.678 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.678 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.935 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:47.935 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:47.935 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:47.935 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:47.935 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:47.935 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:47.935 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:17:47.935 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:47.935 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.935 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.935 [2024-11-20 05:28:19.733416] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.935 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.935 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:47.935 "name": "Existed_Raid", 00:17:47.935 "aliases": [ 00:17:47.935 "5a4b1e10-aaac-4c5e-b6d2-77226d7bfae3" 00:17:47.935 ], 00:17:47.935 "product_name": "Raid Volume", 00:17:47.935 "block_size": 512, 00:17:47.935 "num_blocks": 196608, 00:17:47.935 "uuid": "5a4b1e10-aaac-4c5e-b6d2-77226d7bfae3", 00:17:47.935 "assigned_rate_limits": { 00:17:47.935 "rw_ios_per_sec": 0, 00:17:47.935 "rw_mbytes_per_sec": 0, 00:17:47.935 "r_mbytes_per_sec": 0, 00:17:47.935 "w_mbytes_per_sec": 0 00:17:47.935 }, 00:17:47.935 "claimed": false, 00:17:47.935 "zoned": false, 00:17:47.935 "supported_io_types": { 00:17:47.935 "read": true, 00:17:47.935 "write": true, 00:17:47.935 "unmap": true, 00:17:47.935 "flush": true, 00:17:47.935 "reset": true, 00:17:47.935 "nvme_admin": false, 00:17:47.935 "nvme_io": false, 00:17:47.935 "nvme_io_md": false, 00:17:47.935 "write_zeroes": true, 00:17:47.935 "zcopy": false, 00:17:47.935 "get_zone_info": false, 00:17:47.936 "zone_management": false, 00:17:47.936 "zone_append": false, 00:17:47.936 "compare": false, 00:17:47.936 "compare_and_write": false, 00:17:47.936 "abort": false, 00:17:47.936 "seek_hole": false, 00:17:47.936 "seek_data": false, 00:17:47.936 "copy": false, 00:17:47.936 "nvme_iov_md": false 00:17:47.936 }, 00:17:47.936 "memory_domains": [ 00:17:47.936 { 00:17:47.936 "dma_device_id": "system", 
00:17:47.936 "dma_device_type": 1 00:17:47.936 }, 00:17:47.936 { 00:17:47.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.936 "dma_device_type": 2 00:17:47.936 }, 00:17:47.936 { 00:17:47.936 "dma_device_id": "system", 00:17:47.936 "dma_device_type": 1 00:17:47.936 }, 00:17:47.936 { 00:17:47.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.936 "dma_device_type": 2 00:17:47.936 }, 00:17:47.936 { 00:17:47.936 "dma_device_id": "system", 00:17:47.936 "dma_device_type": 1 00:17:47.936 }, 00:17:47.936 { 00:17:47.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.936 "dma_device_type": 2 00:17:47.936 } 00:17:47.936 ], 00:17:47.936 "driver_specific": { 00:17:47.936 "raid": { 00:17:47.936 "uuid": "5a4b1e10-aaac-4c5e-b6d2-77226d7bfae3", 00:17:47.936 "strip_size_kb": 64, 00:17:47.936 "state": "online", 00:17:47.936 "raid_level": "concat", 00:17:47.936 "superblock": false, 00:17:47.936 "num_base_bdevs": 3, 00:17:47.936 "num_base_bdevs_discovered": 3, 00:17:47.936 "num_base_bdevs_operational": 3, 00:17:47.936 "base_bdevs_list": [ 00:17:47.936 { 00:17:47.936 "name": "BaseBdev1", 00:17:47.936 "uuid": "fb3bdb3a-2eba-4a93-8587-75ebbec6ee3f", 00:17:47.936 "is_configured": true, 00:17:47.936 "data_offset": 0, 00:17:47.936 "data_size": 65536 00:17:47.936 }, 00:17:47.936 { 00:17:47.936 "name": "BaseBdev2", 00:17:47.936 "uuid": "6f03c697-570c-4a45-bde1-9312456033b8", 00:17:47.936 "is_configured": true, 00:17:47.936 "data_offset": 0, 00:17:47.936 "data_size": 65536 00:17:47.936 }, 00:17:47.936 { 00:17:47.936 "name": "BaseBdev3", 00:17:47.936 "uuid": "1fa13902-2795-4c99-8993-19a2b0d8c3a6", 00:17:47.936 "is_configured": true, 00:17:47.936 "data_offset": 0, 00:17:47.936 "data_size": 65536 00:17:47.936 } 00:17:47.936 ] 00:17:47.936 } 00:17:47.936 } 00:17:47.936 }' 00:17:47.936 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:48.194 05:28:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:48.194 BaseBdev2 00:17:48.194 BaseBdev3' 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.194 05:28:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.194 [2024-11-20 05:28:19.913210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.194 [2024-11-20 05:28:19.913254] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.194 [2024-11-20 05:28:19.913306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.194 05:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.194 05:28:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.195 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.195 05:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.195 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.195 "name": "Existed_Raid", 00:17:48.195 "uuid": "5a4b1e10-aaac-4c5e-b6d2-77226d7bfae3", 00:17:48.195 "strip_size_kb": 64, 00:17:48.195 "state": "offline", 00:17:48.195 "raid_level": "concat", 00:17:48.195 "superblock": false, 00:17:48.195 "num_base_bdevs": 3, 00:17:48.195 "num_base_bdevs_discovered": 2, 00:17:48.195 "num_base_bdevs_operational": 2, 00:17:48.195 "base_bdevs_list": [ 00:17:48.195 { 00:17:48.195 "name": null, 00:17:48.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.195 "is_configured": false, 00:17:48.195 "data_offset": 0, 00:17:48.195 "data_size": 65536 00:17:48.195 }, 00:17:48.195 { 00:17:48.195 "name": "BaseBdev2", 00:17:48.195 "uuid": "6f03c697-570c-4a45-bde1-9312456033b8", 00:17:48.195 "is_configured": true, 00:17:48.195 "data_offset": 0, 00:17:48.195 "data_size": 65536 00:17:48.195 }, 00:17:48.195 { 00:17:48.195 "name": "BaseBdev3", 00:17:48.195 "uuid": "1fa13902-2795-4c99-8993-19a2b0d8c3a6", 00:17:48.195 "is_configured": true, 00:17:48.195 "data_offset": 0, 00:17:48.195 "data_size": 65536 00:17:48.195 } 00:17:48.195 ] 00:17:48.195 }' 00:17:48.195 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.195 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.456 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:48.456 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq 
-r '.[0]["name"]' 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.714 [2024-11-20 05:28:20.319792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.714 [2024-11-20 05:28:20.408648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:48.714 [2024-11-20 05:28:20.408705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:48.714 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 
-gt 2 ']' 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.715 BaseBdev2 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:48.715 05:28:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.715 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.973 [ 00:17:48.973 { 00:17:48.973 "name": "BaseBdev2", 00:17:48.973 "aliases": [ 00:17:48.973 "6575c3c4-7697-407d-be36-b2404f616959" 00:17:48.973 ], 00:17:48.973 "product_name": "Malloc disk", 00:17:48.973 "block_size": 512, 00:17:48.973 "num_blocks": 65536, 00:17:48.973 "uuid": "6575c3c4-7697-407d-be36-b2404f616959", 00:17:48.973 "assigned_rate_limits": { 00:17:48.973 "rw_ios_per_sec": 0, 00:17:48.973 "rw_mbytes_per_sec": 0, 00:17:48.973 "r_mbytes_per_sec": 0, 00:17:48.973 "w_mbytes_per_sec": 0 00:17:48.973 }, 00:17:48.973 "claimed": false, 00:17:48.973 "zoned": false, 00:17:48.973 "supported_io_types": { 00:17:48.973 "read": true, 00:17:48.973 "write": true, 00:17:48.973 "unmap": true, 00:17:48.973 "flush": true, 00:17:48.973 "reset": true, 00:17:48.973 "nvme_admin": false, 00:17:48.973 "nvme_io": false, 00:17:48.973 "nvme_io_md": false, 00:17:48.973 "write_zeroes": true, 00:17:48.973 "zcopy": true, 00:17:48.973 "get_zone_info": false, 00:17:48.973 "zone_management": false, 00:17:48.973 "zone_append": false, 00:17:48.973 "compare": false, 00:17:48.973 "compare_and_write": false, 00:17:48.973 "abort": true, 00:17:48.973 "seek_hole": false, 00:17:48.973 "seek_data": false, 00:17:48.973 "copy": true, 00:17:48.973 "nvme_iov_md": false 00:17:48.973 }, 00:17:48.973 "memory_domains": [ 00:17:48.973 { 00:17:48.973 "dma_device_id": "system", 00:17:48.973 "dma_device_type": 1 00:17:48.973 }, 00:17:48.973 { 00:17:48.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.973 "dma_device_type": 2 00:17:48.973 } 00:17:48.973 ], 00:17:48.973 "driver_specific": {} 00:17:48.973 } 00:17:48.973 ] 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@909 -- # return 0 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.973 BaseBdev3 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:48.973 
05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.973 [ 00:17:48.973 { 00:17:48.973 "name": "BaseBdev3", 00:17:48.973 "aliases": [ 00:17:48.973 "ae19f8ab-1ebc-4eda-8308-0ba180623b53" 00:17:48.973 ], 00:17:48.973 "product_name": "Malloc disk", 00:17:48.973 "block_size": 512, 00:17:48.973 "num_blocks": 65536, 00:17:48.973 "uuid": "ae19f8ab-1ebc-4eda-8308-0ba180623b53", 00:17:48.973 "assigned_rate_limits": { 00:17:48.973 "rw_ios_per_sec": 0, 00:17:48.973 "rw_mbytes_per_sec": 0, 00:17:48.973 "r_mbytes_per_sec": 0, 00:17:48.973 "w_mbytes_per_sec": 0 00:17:48.973 }, 00:17:48.973 "claimed": false, 00:17:48.973 "zoned": false, 00:17:48.973 "supported_io_types": { 00:17:48.973 "read": true, 00:17:48.973 "write": true, 00:17:48.973 "unmap": true, 00:17:48.973 "flush": true, 00:17:48.973 "reset": true, 00:17:48.973 "nvme_admin": false, 00:17:48.973 "nvme_io": false, 00:17:48.973 "nvme_io_md": false, 00:17:48.973 "write_zeroes": true, 00:17:48.973 "zcopy": true, 00:17:48.973 "get_zone_info": false, 00:17:48.973 "zone_management": false, 00:17:48.973 "zone_append": false, 00:17:48.973 "compare": false, 00:17:48.973 "compare_and_write": false, 00:17:48.973 "abort": true, 00:17:48.973 "seek_hole": false, 00:17:48.973 "seek_data": false, 00:17:48.973 "copy": true, 00:17:48.973 "nvme_iov_md": false 00:17:48.973 }, 00:17:48.973 "memory_domains": [ 00:17:48.973 { 00:17:48.973 "dma_device_id": "system", 00:17:48.973 "dma_device_type": 1 00:17:48.973 }, 00:17:48.973 { 00:17:48.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.973 "dma_device_type": 2 00:17:48.973 } 00:17:48.973 ], 00:17:48.973 "driver_specific": {} 00:17:48.973 } 00:17:48.973 ] 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@909 -- # return 0 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.973 [2024-11-20 05:28:20.608588] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:48.973 [2024-11-20 05:28:20.608777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:48.973 [2024-11-20 05:28:20.608839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.973 [2024-11-20 05:28:20.610526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.973 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.974 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.974 "name": "Existed_Raid", 00:17:48.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.974 "strip_size_kb": 64, 00:17:48.974 "state": "configuring", 00:17:48.974 "raid_level": "concat", 00:17:48.974 "superblock": false, 00:17:48.974 "num_base_bdevs": 3, 00:17:48.974 "num_base_bdevs_discovered": 2, 00:17:48.974 "num_base_bdevs_operational": 3, 00:17:48.974 "base_bdevs_list": [ 00:17:48.974 { 00:17:48.974 "name": "BaseBdev1", 00:17:48.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.974 "is_configured": false, 00:17:48.974 "data_offset": 0, 00:17:48.974 "data_size": 0 00:17:48.974 }, 00:17:48.974 { 00:17:48.974 "name": "BaseBdev2", 00:17:48.974 "uuid": "6575c3c4-7697-407d-be36-b2404f616959", 00:17:48.974 "is_configured": true, 00:17:48.974 "data_offset": 0, 00:17:48.974 "data_size": 65536 00:17:48.974 }, 00:17:48.974 { 00:17:48.974 "name": 
"BaseBdev3", 00:17:48.974 "uuid": "ae19f8ab-1ebc-4eda-8308-0ba180623b53", 00:17:48.974 "is_configured": true, 00:17:48.974 "data_offset": 0, 00:17:48.974 "data_size": 65536 00:17:48.974 } 00:17:48.974 ] 00:17:48.974 }' 00:17:48.974 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.974 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.233 [2024-11-20 05:28:20.924678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.233 "name": "Existed_Raid", 00:17:49.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.233 "strip_size_kb": 64, 00:17:49.233 "state": "configuring", 00:17:49.233 "raid_level": "concat", 00:17:49.233 "superblock": false, 00:17:49.233 "num_base_bdevs": 3, 00:17:49.233 "num_base_bdevs_discovered": 1, 00:17:49.233 "num_base_bdevs_operational": 3, 00:17:49.233 "base_bdevs_list": [ 00:17:49.233 { 00:17:49.233 "name": "BaseBdev1", 00:17:49.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.233 "is_configured": false, 00:17:49.233 "data_offset": 0, 00:17:49.233 "data_size": 0 00:17:49.233 }, 00:17:49.233 { 00:17:49.233 "name": null, 00:17:49.233 "uuid": "6575c3c4-7697-407d-be36-b2404f616959", 00:17:49.233 "is_configured": false, 00:17:49.233 "data_offset": 0, 00:17:49.233 "data_size": 65536 00:17:49.233 }, 00:17:49.233 { 00:17:49.233 "name": "BaseBdev3", 00:17:49.233 "uuid": "ae19f8ab-1ebc-4eda-8308-0ba180623b53", 00:17:49.233 "is_configured": true, 00:17:49.233 "data_offset": 0, 00:17:49.233 "data_size": 65536 00:17:49.233 } 00:17:49.233 ] 00:17:49.233 }' 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.233 05:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.491 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.491 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:49.491 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.491 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.491 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.491 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:49.491 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:49.491 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.492 [2024-11-20 05:28:21.293735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.492 BaseBdev1 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:49.492 
05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.492 [ 00:17:49.492 { 00:17:49.492 "name": "BaseBdev1", 00:17:49.492 "aliases": [ 00:17:49.492 "dd0c970f-b874-4fcb-ac52-6a5ac9a61a94" 00:17:49.492 ], 00:17:49.492 "product_name": "Malloc disk", 00:17:49.492 "block_size": 512, 00:17:49.492 "num_blocks": 65536, 00:17:49.492 "uuid": "dd0c970f-b874-4fcb-ac52-6a5ac9a61a94", 00:17:49.492 "assigned_rate_limits": { 00:17:49.492 "rw_ios_per_sec": 0, 00:17:49.492 "rw_mbytes_per_sec": 0, 00:17:49.492 "r_mbytes_per_sec": 0, 00:17:49.492 "w_mbytes_per_sec": 0 00:17:49.492 }, 00:17:49.492 "claimed": true, 00:17:49.492 "claim_type": "exclusive_write", 00:17:49.492 "zoned": false, 00:17:49.492 "supported_io_types": { 00:17:49.492 "read": true, 00:17:49.492 "write": true, 00:17:49.492 "unmap": true, 00:17:49.492 "flush": true, 00:17:49.492 "reset": true, 00:17:49.492 "nvme_admin": false, 00:17:49.492 "nvme_io": false, 00:17:49.492 "nvme_io_md": false, 00:17:49.492 "write_zeroes": true, 00:17:49.492 "zcopy": true, 00:17:49.492 "get_zone_info": false, 00:17:49.492 "zone_management": false, 00:17:49.492 "zone_append": false, 00:17:49.492 "compare": 
false, 00:17:49.492 "compare_and_write": false, 00:17:49.492 "abort": true, 00:17:49.492 "seek_hole": false, 00:17:49.492 "seek_data": false, 00:17:49.492 "copy": true, 00:17:49.492 "nvme_iov_md": false 00:17:49.492 }, 00:17:49.492 "memory_domains": [ 00:17:49.492 { 00:17:49.492 "dma_device_id": "system", 00:17:49.492 "dma_device_type": 1 00:17:49.492 }, 00:17:49.492 { 00:17:49.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.492 "dma_device_type": 2 00:17:49.492 } 00:17:49.492 ], 00:17:49.492 "driver_specific": {} 00:17:49.492 } 00:17:49.492 ] 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.492 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.749 05:28:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.749 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.749 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.749 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.749 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.749 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.749 "name": "Existed_Raid", 00:17:49.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.749 "strip_size_kb": 64, 00:17:49.749 "state": "configuring", 00:17:49.749 "raid_level": "concat", 00:17:49.749 "superblock": false, 00:17:49.749 "num_base_bdevs": 3, 00:17:49.749 "num_base_bdevs_discovered": 2, 00:17:49.749 "num_base_bdevs_operational": 3, 00:17:49.749 "base_bdevs_list": [ 00:17:49.749 { 00:17:49.749 "name": "BaseBdev1", 00:17:49.749 "uuid": "dd0c970f-b874-4fcb-ac52-6a5ac9a61a94", 00:17:49.749 "is_configured": true, 00:17:49.749 "data_offset": 0, 00:17:49.749 "data_size": 65536 00:17:49.749 }, 00:17:49.749 { 00:17:49.749 "name": null, 00:17:49.749 "uuid": "6575c3c4-7697-407d-be36-b2404f616959", 00:17:49.749 "is_configured": false, 00:17:49.749 "data_offset": 0, 00:17:49.749 "data_size": 65536 00:17:49.749 }, 00:17:49.749 { 00:17:49.750 "name": "BaseBdev3", 00:17:49.750 "uuid": "ae19f8ab-1ebc-4eda-8308-0ba180623b53", 00:17:49.750 "is_configured": true, 00:17:49.750 "data_offset": 0, 00:17:49.750 "data_size": 65536 00:17:49.750 } 00:17:49.750 ] 00:17:49.750 }' 00:17:49.750 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.750 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 
-- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.007 [2024-11-20 05:28:21.677854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.007 "name": "Existed_Raid", 00:17:50.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.007 "strip_size_kb": 64, 00:17:50.007 "state": "configuring", 00:17:50.007 "raid_level": "concat", 00:17:50.007 "superblock": false, 00:17:50.007 "num_base_bdevs": 3, 00:17:50.007 "num_base_bdevs_discovered": 1, 00:17:50.007 "num_base_bdevs_operational": 3, 00:17:50.007 "base_bdevs_list": [ 00:17:50.007 { 00:17:50.007 "name": "BaseBdev1", 00:17:50.007 "uuid": "dd0c970f-b874-4fcb-ac52-6a5ac9a61a94", 00:17:50.007 "is_configured": true, 00:17:50.007 "data_offset": 0, 00:17:50.007 "data_size": 65536 00:17:50.007 }, 00:17:50.007 { 00:17:50.007 "name": null, 00:17:50.007 "uuid": "6575c3c4-7697-407d-be36-b2404f616959", 00:17:50.007 "is_configured": false, 00:17:50.007 "data_offset": 0, 00:17:50.007 "data_size": 65536 00:17:50.007 }, 00:17:50.007 { 00:17:50.007 "name": null, 00:17:50.007 "uuid": "ae19f8ab-1ebc-4eda-8308-0ba180623b53", 00:17:50.007 "is_configured": false, 00:17:50.007 
"data_offset": 0, 00:17:50.007 "data_size": 65536 00:17:50.007 } 00:17:50.007 ] 00:17:50.007 }' 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.007 05:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.266 [2024-11-20 05:28:22.041979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.266 05:28:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.266 "name": "Existed_Raid", 00:17:50.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.266 "strip_size_kb": 64, 00:17:50.266 "state": "configuring", 00:17:50.266 "raid_level": "concat", 00:17:50.266 "superblock": false, 00:17:50.266 "num_base_bdevs": 3, 00:17:50.266 "num_base_bdevs_discovered": 2, 00:17:50.266 "num_base_bdevs_operational": 3, 00:17:50.266 "base_bdevs_list": [ 00:17:50.266 { 00:17:50.266 "name": "BaseBdev1", 00:17:50.266 "uuid": "dd0c970f-b874-4fcb-ac52-6a5ac9a61a94", 00:17:50.266 "is_configured": true, 00:17:50.266 "data_offset": 
0, 00:17:50.266 "data_size": 65536 00:17:50.266 }, 00:17:50.266 { 00:17:50.266 "name": null, 00:17:50.266 "uuid": "6575c3c4-7697-407d-be36-b2404f616959", 00:17:50.266 "is_configured": false, 00:17:50.266 "data_offset": 0, 00:17:50.266 "data_size": 65536 00:17:50.266 }, 00:17:50.266 { 00:17:50.266 "name": "BaseBdev3", 00:17:50.266 "uuid": "ae19f8ab-1ebc-4eda-8308-0ba180623b53", 00:17:50.266 "is_configured": true, 00:17:50.266 "data_offset": 0, 00:17:50.266 "data_size": 65536 00:17:50.266 } 00:17:50.266 ] 00:17:50.266 }' 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.266 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.832 [2024-11-20 05:28:22.402039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.832 05:28:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.832 "name": "Existed_Raid", 00:17:50.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.832 "strip_size_kb": 64, 00:17:50.832 "state": "configuring", 00:17:50.832 
"raid_level": "concat", 00:17:50.832 "superblock": false, 00:17:50.832 "num_base_bdevs": 3, 00:17:50.832 "num_base_bdevs_discovered": 1, 00:17:50.832 "num_base_bdevs_operational": 3, 00:17:50.832 "base_bdevs_list": [ 00:17:50.832 { 00:17:50.832 "name": null, 00:17:50.832 "uuid": "dd0c970f-b874-4fcb-ac52-6a5ac9a61a94", 00:17:50.832 "is_configured": false, 00:17:50.832 "data_offset": 0, 00:17:50.832 "data_size": 65536 00:17:50.832 }, 00:17:50.832 { 00:17:50.832 "name": null, 00:17:50.832 "uuid": "6575c3c4-7697-407d-be36-b2404f616959", 00:17:50.832 "is_configured": false, 00:17:50.832 "data_offset": 0, 00:17:50.832 "data_size": 65536 00:17:50.832 }, 00:17:50.832 { 00:17:50.832 "name": "BaseBdev3", 00:17:50.832 "uuid": "ae19f8ab-1ebc-4eda-8308-0ba180623b53", 00:17:50.832 "is_configured": true, 00:17:50.832 "data_offset": 0, 00:17:50.832 "data_size": 65536 00:17:50.832 } 00:17:50.832 ] 00:17:50.832 }' 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.832 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.091 [2024-11-20 05:28:22.779924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.091 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.091 "name": "Existed_Raid", 00:17:51.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.091 "strip_size_kb": 64, 00:17:51.091 "state": "configuring", 00:17:51.091 "raid_level": "concat", 00:17:51.091 "superblock": false, 00:17:51.091 "num_base_bdevs": 3, 00:17:51.091 "num_base_bdevs_discovered": 2, 00:17:51.091 "num_base_bdevs_operational": 3, 00:17:51.091 "base_bdevs_list": [ 00:17:51.091 { 00:17:51.091 "name": null, 00:17:51.091 "uuid": "dd0c970f-b874-4fcb-ac52-6a5ac9a61a94", 00:17:51.091 "is_configured": false, 00:17:51.091 "data_offset": 0, 00:17:51.091 "data_size": 65536 00:17:51.091 }, 00:17:51.091 { 00:17:51.091 "name": "BaseBdev2", 00:17:51.092 "uuid": "6575c3c4-7697-407d-be36-b2404f616959", 00:17:51.092 "is_configured": true, 00:17:51.092 "data_offset": 0, 00:17:51.092 "data_size": 65536 00:17:51.092 }, 00:17:51.092 { 00:17:51.092 "name": "BaseBdev3", 00:17:51.092 "uuid": "ae19f8ab-1ebc-4eda-8308-0ba180623b53", 00:17:51.092 "is_configured": true, 00:17:51.092 "data_offset": 0, 00:17:51.092 "data_size": 65536 00:17:51.092 } 00:17:51.092 ] 00:17:51.092 }' 00:17:51.092 05:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.092 05:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.350 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.350 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:51.350 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.350 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:51.350 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.350 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:51.350 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:51.350 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.350 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.350 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.350 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.350 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dd0c970f-b874-4fcb-ac52-6a5ac9a61a94 00:17:51.350 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.350 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.350 [2024-11-20 05:28:23.180353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:51.350 [2024-11-20 05:28:23.180404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:51.350 [2024-11-20 05:28:23.180412] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:51.350 [2024-11-20 05:28:23.180624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:51.350 [2024-11-20 05:28:23.180733] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:51.350 [2024-11-20 05:28:23.180739] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:51.350 [2024-11-20 
05:28:23.180935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.350 NewBaseBdev 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.608 [ 00:17:51.608 { 00:17:51.608 "name": "NewBaseBdev", 00:17:51.608 "aliases": [ 00:17:51.608 "dd0c970f-b874-4fcb-ac52-6a5ac9a61a94" 00:17:51.608 ], 00:17:51.608 "product_name": "Malloc disk", 00:17:51.608 "block_size": 512, 00:17:51.608 "num_blocks": 65536, 00:17:51.608 "uuid": 
"dd0c970f-b874-4fcb-ac52-6a5ac9a61a94", 00:17:51.608 "assigned_rate_limits": { 00:17:51.608 "rw_ios_per_sec": 0, 00:17:51.608 "rw_mbytes_per_sec": 0, 00:17:51.608 "r_mbytes_per_sec": 0, 00:17:51.608 "w_mbytes_per_sec": 0 00:17:51.608 }, 00:17:51.608 "claimed": true, 00:17:51.608 "claim_type": "exclusive_write", 00:17:51.608 "zoned": false, 00:17:51.608 "supported_io_types": { 00:17:51.608 "read": true, 00:17:51.608 "write": true, 00:17:51.608 "unmap": true, 00:17:51.608 "flush": true, 00:17:51.608 "reset": true, 00:17:51.608 "nvme_admin": false, 00:17:51.608 "nvme_io": false, 00:17:51.608 "nvme_io_md": false, 00:17:51.608 "write_zeroes": true, 00:17:51.608 "zcopy": true, 00:17:51.608 "get_zone_info": false, 00:17:51.608 "zone_management": false, 00:17:51.608 "zone_append": false, 00:17:51.608 "compare": false, 00:17:51.608 "compare_and_write": false, 00:17:51.608 "abort": true, 00:17:51.608 "seek_hole": false, 00:17:51.608 "seek_data": false, 00:17:51.608 "copy": true, 00:17:51.608 "nvme_iov_md": false 00:17:51.608 }, 00:17:51.608 "memory_domains": [ 00:17:51.608 { 00:17:51.608 "dma_device_id": "system", 00:17:51.608 "dma_device_type": 1 00:17:51.608 }, 00:17:51.608 { 00:17:51.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.608 "dma_device_type": 2 00:17:51.608 } 00:17:51.608 ], 00:17:51.608 "driver_specific": {} 00:17:51.608 } 00:17:51.608 ] 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.608 05:28:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.608 "name": "Existed_Raid", 00:17:51.608 "uuid": "21691580-8a9b-4364-9d8d-e0239c01d385", 00:17:51.608 "strip_size_kb": 64, 00:17:51.608 "state": "online", 00:17:51.608 "raid_level": "concat", 00:17:51.608 "superblock": false, 00:17:51.608 "num_base_bdevs": 3, 00:17:51.608 "num_base_bdevs_discovered": 3, 00:17:51.608 "num_base_bdevs_operational": 3, 00:17:51.608 "base_bdevs_list": [ 00:17:51.608 { 00:17:51.608 "name": "NewBaseBdev", 00:17:51.608 "uuid": "dd0c970f-b874-4fcb-ac52-6a5ac9a61a94", 00:17:51.608 "is_configured": true, 00:17:51.608 "data_offset": 0, 
00:17:51.608 "data_size": 65536 00:17:51.608 }, 00:17:51.608 { 00:17:51.608 "name": "BaseBdev2", 00:17:51.608 "uuid": "6575c3c4-7697-407d-be36-b2404f616959", 00:17:51.608 "is_configured": true, 00:17:51.608 "data_offset": 0, 00:17:51.608 "data_size": 65536 00:17:51.608 }, 00:17:51.608 { 00:17:51.608 "name": "BaseBdev3", 00:17:51.608 "uuid": "ae19f8ab-1ebc-4eda-8308-0ba180623b53", 00:17:51.608 "is_configured": true, 00:17:51.608 "data_offset": 0, 00:17:51.608 "data_size": 65536 00:17:51.608 } 00:17:51.608 ] 00:17:51.608 }' 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.608 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:51.866 [2024-11-20 05:28:23.560783] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.866 05:28:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:51.866 "name": "Existed_Raid", 00:17:51.866 "aliases": [ 00:17:51.866 "21691580-8a9b-4364-9d8d-e0239c01d385" 00:17:51.866 ], 00:17:51.866 "product_name": "Raid Volume", 00:17:51.866 "block_size": 512, 00:17:51.866 "num_blocks": 196608, 00:17:51.866 "uuid": "21691580-8a9b-4364-9d8d-e0239c01d385", 00:17:51.866 "assigned_rate_limits": { 00:17:51.866 "rw_ios_per_sec": 0, 00:17:51.866 "rw_mbytes_per_sec": 0, 00:17:51.866 "r_mbytes_per_sec": 0, 00:17:51.866 "w_mbytes_per_sec": 0 00:17:51.866 }, 00:17:51.866 "claimed": false, 00:17:51.866 "zoned": false, 00:17:51.866 "supported_io_types": { 00:17:51.866 "read": true, 00:17:51.866 "write": true, 00:17:51.866 "unmap": true, 00:17:51.866 "flush": true, 00:17:51.866 "reset": true, 00:17:51.866 "nvme_admin": false, 00:17:51.866 "nvme_io": false, 00:17:51.866 "nvme_io_md": false, 00:17:51.866 "write_zeroes": true, 00:17:51.866 "zcopy": false, 00:17:51.866 "get_zone_info": false, 00:17:51.866 "zone_management": false, 00:17:51.866 "zone_append": false, 00:17:51.866 "compare": false, 00:17:51.866 "compare_and_write": false, 00:17:51.866 "abort": false, 00:17:51.866 "seek_hole": false, 00:17:51.866 "seek_data": false, 00:17:51.866 "copy": false, 00:17:51.866 "nvme_iov_md": false 00:17:51.866 }, 00:17:51.866 "memory_domains": [ 00:17:51.866 { 00:17:51.866 "dma_device_id": "system", 00:17:51.866 "dma_device_type": 1 00:17:51.866 }, 00:17:51.866 { 00:17:51.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.866 "dma_device_type": 2 00:17:51.866 }, 00:17:51.866 { 00:17:51.866 "dma_device_id": "system", 00:17:51.866 "dma_device_type": 1 00:17:51.866 }, 00:17:51.866 { 00:17:51.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.866 "dma_device_type": 2 00:17:51.866 }, 00:17:51.866 { 00:17:51.866 "dma_device_id": "system", 00:17:51.866 
"dma_device_type": 1 00:17:51.866 }, 00:17:51.866 { 00:17:51.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.866 "dma_device_type": 2 00:17:51.866 } 00:17:51.866 ], 00:17:51.866 "driver_specific": { 00:17:51.866 "raid": { 00:17:51.866 "uuid": "21691580-8a9b-4364-9d8d-e0239c01d385", 00:17:51.866 "strip_size_kb": 64, 00:17:51.866 "state": "online", 00:17:51.866 "raid_level": "concat", 00:17:51.866 "superblock": false, 00:17:51.866 "num_base_bdevs": 3, 00:17:51.866 "num_base_bdevs_discovered": 3, 00:17:51.866 "num_base_bdevs_operational": 3, 00:17:51.866 "base_bdevs_list": [ 00:17:51.866 { 00:17:51.866 "name": "NewBaseBdev", 00:17:51.866 "uuid": "dd0c970f-b874-4fcb-ac52-6a5ac9a61a94", 00:17:51.866 "is_configured": true, 00:17:51.866 "data_offset": 0, 00:17:51.866 "data_size": 65536 00:17:51.866 }, 00:17:51.866 { 00:17:51.866 "name": "BaseBdev2", 00:17:51.866 "uuid": "6575c3c4-7697-407d-be36-b2404f616959", 00:17:51.866 "is_configured": true, 00:17:51.866 "data_offset": 0, 00:17:51.866 "data_size": 65536 00:17:51.866 }, 00:17:51.866 { 00:17:51.866 "name": "BaseBdev3", 00:17:51.866 "uuid": "ae19f8ab-1ebc-4eda-8308-0ba180623b53", 00:17:51.866 "is_configured": true, 00:17:51.866 "data_offset": 0, 00:17:51.866 "data_size": 65536 00:17:51.866 } 00:17:51.866 ] 00:17:51.866 } 00:17:51.866 } 00:17:51.866 }' 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:51.866 BaseBdev2 00:17:51.866 BaseBdev3' 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.866 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.867 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.124 [2024-11-20 05:28:23.748538] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:52.124 [2024-11-20 05:28:23.748574] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:52.124 [2024-11-20 05:28:23.748658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.124 [2024-11-20 05:28:23.748721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.124 [2024-11-20 05:28:23.748731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64156 
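The `cmp_base_bdev='512 '` values and the `[[ 512 == \5\1\2\ \ \ ]]` checks above come from `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`: jq's `join()` renders `null` as an empty string, so a 512-byte-block bdev with no metadata yields `"512"` followed by three trailing spaces, which the escaped bash pattern then matches literally. A small sketch of that join behaviour (the `jq_join` helper is illustrative, not part of the test scripts):

```python
# Fields compared by bdev_raid.sh@189/@192:
#   [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
fields = [512, None, None, None]  # block_size, md_size, md_interleave, dif_type

def jq_join(values, sep=" "):
    """Mimic jq's join(): null becomes "", numbers are stringified."""
    return sep.join("" if v is None else str(v) for v in values)

cmp_bdev = jq_join(fields)
# "512" plus three separators around empty fields -> "512   ",
# which is what the bash pattern \5\1\2\ \ \  matches.
```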
00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 64156 ']' 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 64156 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64156 00:17:52.124 killing process with pid 64156 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64156' 00:17:52.124 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 64156 00:17:52.125 [2024-11-20 05:28:23.782399] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:52.125 05:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 64156 00:17:52.125 [2024-11-20 05:28:23.941433] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:53.065 00:17:53.065 real 0m7.605s 00:17:53.065 user 0m12.207s 00:17:53.065 sys 0m1.292s 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:53.065 ************************************ 00:17:53.065 END TEST raid_state_function_test 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.065 ************************************ 00:17:53.065 05:28:24 bdev_raid -- 
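The `killprocess 64156` sequence above first probes the pid with `kill -0` (existence/permission check only, no signal delivered) and inspects the process name via `ps --no-headers -o comm=` before sending the real kill. A hedged Python analogue of that liveness probe, as a sketch rather than the actual shell helper:

```python
import os

def process_alive(pid: int) -> bool:
    """Mimic the `kill -0 $pid` probe used by killprocess: signal 0
    performs only the existence/permission check."""
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:
        return False          # no such process
    except PermissionError:
        return True           # exists, but owned by another user

# Our own pid is trivially alive.
alive = process_alive(os.getpid())
```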
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:17:53.065 05:28:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:53.065 05:28:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:53.065 05:28:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:53.065 ************************************ 00:17:53.065 START TEST raid_state_function_test_sb 00:17:53.065 ************************************ 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:53.065 05:28:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:53.065 Process raid pid: 64750 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64750 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64750' 00:17:53.065 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64750 00:17:53.066 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.066 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64750 ']' 00:17:53.066 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.066 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:53.066 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.066 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:53.066 05:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.066 05:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:53.066 [2024-11-20 05:28:24.674694] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:17:53.066 [2024-11-20 05:28:24.674826] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.066 [2024-11-20 05:28:24.837088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.323 [2024-11-20 05:28:24.959860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.323 [2024-11-20 05:28:25.116770] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.323 [2024-11-20 05:28:25.116829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.888 [2024-11-20 05:28:25.524919] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:53.888 [2024-11-20 05:28:25.524981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:53.888 [2024-11-20 05:28:25.524992] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.888 [2024-11-20 05:28:25.525003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.888 [2024-11-20 05:28:25.525009] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:53.888 [2024-11-20 05:28:25.525018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.888 "name": "Existed_Raid", 00:17:53.888 "uuid": "bc06de23-6c8f-4312-a5b9-927db6fae402", 00:17:53.888 "strip_size_kb": 64, 00:17:53.888 "state": "configuring", 00:17:53.888 "raid_level": "concat", 00:17:53.888 "superblock": true, 00:17:53.888 "num_base_bdevs": 3, 00:17:53.888 "num_base_bdevs_discovered": 0, 00:17:53.888 "num_base_bdevs_operational": 3, 00:17:53.888 "base_bdevs_list": [ 00:17:53.888 { 00:17:53.888 "name": "BaseBdev1", 00:17:53.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.888 "is_configured": false, 00:17:53.888 "data_offset": 0, 00:17:53.888 "data_size": 0 00:17:53.888 }, 00:17:53.888 { 00:17:53.888 "name": "BaseBdev2", 00:17:53.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.888 "is_configured": false, 00:17:53.888 "data_offset": 0, 00:17:53.888 "data_size": 0 00:17:53.888 }, 00:17:53.888 { 00:17:53.888 "name": "BaseBdev3", 00:17:53.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.888 "is_configured": false, 00:17:53.888 "data_offset": 0, 00:17:53.888 "data_size": 0 00:17:53.888 } 00:17:53.888 ] 00:17:53.888 }' 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.888 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.146 [2024-11-20 05:28:25.864916] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:54.146 [2024-11-20 05:28:25.865072] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.146 [2024-11-20 05:28:25.872911] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:54.146 [2024-11-20 05:28:25.872956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:54.146 [2024-11-20 05:28:25.872966] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:54.146 [2024-11-20 05:28:25.872975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:54.146 [2024-11-20 05:28:25.872982] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:54.146 [2024-11-20 05:28:25.872992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.146 [2024-11-20 05:28:25.908999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.146 BaseBdev1 
00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.146 [ 00:17:54.146 { 00:17:54.146 "name": "BaseBdev1", 00:17:54.146 "aliases": [ 00:17:54.146 "5b68f68e-0ee3-40d5-a3c0-adb276ba4a25" 00:17:54.146 ], 00:17:54.146 "product_name": "Malloc disk", 00:17:54.146 "block_size": 512, 00:17:54.146 "num_blocks": 65536, 00:17:54.146 "uuid": "5b68f68e-0ee3-40d5-a3c0-adb276ba4a25", 00:17:54.146 "assigned_rate_limits": { 00:17:54.146 
"rw_ios_per_sec": 0, 00:17:54.146 "rw_mbytes_per_sec": 0, 00:17:54.146 "r_mbytes_per_sec": 0, 00:17:54.146 "w_mbytes_per_sec": 0 00:17:54.146 }, 00:17:54.146 "claimed": true, 00:17:54.146 "claim_type": "exclusive_write", 00:17:54.146 "zoned": false, 00:17:54.146 "supported_io_types": { 00:17:54.146 "read": true, 00:17:54.146 "write": true, 00:17:54.146 "unmap": true, 00:17:54.146 "flush": true, 00:17:54.146 "reset": true, 00:17:54.146 "nvme_admin": false, 00:17:54.146 "nvme_io": false, 00:17:54.146 "nvme_io_md": false, 00:17:54.146 "write_zeroes": true, 00:17:54.146 "zcopy": true, 00:17:54.146 "get_zone_info": false, 00:17:54.146 "zone_management": false, 00:17:54.146 "zone_append": false, 00:17:54.146 "compare": false, 00:17:54.146 "compare_and_write": false, 00:17:54.146 "abort": true, 00:17:54.146 "seek_hole": false, 00:17:54.146 "seek_data": false, 00:17:54.146 "copy": true, 00:17:54.146 "nvme_iov_md": false 00:17:54.146 }, 00:17:54.146 "memory_domains": [ 00:17:54.146 { 00:17:54.146 "dma_device_id": "system", 00:17:54.146 "dma_device_type": 1 00:17:54.146 }, 00:17:54.146 { 00:17:54.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.146 "dma_device_type": 2 00:17:54.146 } 00:17:54.146 ], 00:17:54.146 "driver_specific": {} 00:17:54.146 } 00:17:54.146 ] 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.146 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.147 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.147 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.147 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.147 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.147 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.147 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.147 "name": "Existed_Raid", 00:17:54.147 "uuid": "6b883701-06b1-494e-8294-f8bc9cf6bdb9", 00:17:54.147 "strip_size_kb": 64, 00:17:54.147 "state": "configuring", 00:17:54.147 "raid_level": "concat", 00:17:54.147 "superblock": true, 00:17:54.147 "num_base_bdevs": 3, 00:17:54.147 "num_base_bdevs_discovered": 1, 00:17:54.147 "num_base_bdevs_operational": 3, 00:17:54.147 "base_bdevs_list": [ 00:17:54.147 { 00:17:54.147 "name": "BaseBdev1", 00:17:54.147 "uuid": "5b68f68e-0ee3-40d5-a3c0-adb276ba4a25", 00:17:54.147 "is_configured": true, 00:17:54.147 "data_offset": 2048, 00:17:54.147 "data_size": 
63488 00:17:54.147 }, 00:17:54.147 { 00:17:54.147 "name": "BaseBdev2", 00:17:54.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.147 "is_configured": false, 00:17:54.147 "data_offset": 0, 00:17:54.147 "data_size": 0 00:17:54.147 }, 00:17:54.147 { 00:17:54.147 "name": "BaseBdev3", 00:17:54.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.147 "is_configured": false, 00:17:54.147 "data_offset": 0, 00:17:54.147 "data_size": 0 00:17:54.147 } 00:17:54.147 ] 00:17:54.147 }' 00:17:54.147 05:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.147 05:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.421 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:54.421 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.421 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.421 [2024-11-20 05:28:26.241133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:54.421 [2024-11-20 05:28:26.241309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:54.421 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.421 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:54.421 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.421 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.421 [2024-11-20 05:28:26.249200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.421 [2024-11-20 
05:28:26.251286] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:54.421 [2024-11-20 05:28:26.251422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:54.421 [2024-11-20 05:28:26.251488] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:54.421 [2024-11-20 05:28:26.251516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:54.421 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.421 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.678 "name": "Existed_Raid", 00:17:54.678 "uuid": "31d11dff-e714-44a5-9237-c9d230700efa", 00:17:54.678 "strip_size_kb": 64, 00:17:54.678 "state": "configuring", 00:17:54.678 "raid_level": "concat", 00:17:54.678 "superblock": true, 00:17:54.678 "num_base_bdevs": 3, 00:17:54.678 "num_base_bdevs_discovered": 1, 00:17:54.678 "num_base_bdevs_operational": 3, 00:17:54.678 "base_bdevs_list": [ 00:17:54.678 { 00:17:54.678 "name": "BaseBdev1", 00:17:54.678 "uuid": "5b68f68e-0ee3-40d5-a3c0-adb276ba4a25", 00:17:54.678 "is_configured": true, 00:17:54.678 "data_offset": 2048, 00:17:54.678 "data_size": 63488 00:17:54.678 }, 00:17:54.678 { 00:17:54.678 "name": "BaseBdev2", 00:17:54.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.678 "is_configured": false, 00:17:54.678 "data_offset": 0, 00:17:54.678 "data_size": 0 00:17:54.678 }, 00:17:54.678 { 00:17:54.678 "name": "BaseBdev3", 00:17:54.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.678 "is_configured": false, 00:17:54.678 "data_offset": 0, 00:17:54.678 "data_size": 0 00:17:54.678 } 00:17:54.678 ] 00:17:54.678 }' 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.678 05:28:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:54.935 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:54.935 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.935 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.935 [2024-11-20 05:28:26.614277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:54.935 BaseBdev2 00:17:54.935 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.935 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:54.935 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:54.935 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:54.935 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:54.935 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:54.935 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.936 [ 00:17:54.936 { 00:17:54.936 "name": "BaseBdev2", 00:17:54.936 "aliases": [ 00:17:54.936 "c04e6c0d-4659-49cb-a3b2-353d3da77d2e" 00:17:54.936 ], 00:17:54.936 "product_name": "Malloc disk", 00:17:54.936 "block_size": 512, 00:17:54.936 "num_blocks": 65536, 00:17:54.936 "uuid": "c04e6c0d-4659-49cb-a3b2-353d3da77d2e", 00:17:54.936 "assigned_rate_limits": { 00:17:54.936 "rw_ios_per_sec": 0, 00:17:54.936 "rw_mbytes_per_sec": 0, 00:17:54.936 "r_mbytes_per_sec": 0, 00:17:54.936 "w_mbytes_per_sec": 0 00:17:54.936 }, 00:17:54.936 "claimed": true, 00:17:54.936 "claim_type": "exclusive_write", 00:17:54.936 "zoned": false, 00:17:54.936 "supported_io_types": { 00:17:54.936 "read": true, 00:17:54.936 "write": true, 00:17:54.936 "unmap": true, 00:17:54.936 "flush": true, 00:17:54.936 "reset": true, 00:17:54.936 "nvme_admin": false, 00:17:54.936 "nvme_io": false, 00:17:54.936 "nvme_io_md": false, 00:17:54.936 "write_zeroes": true, 00:17:54.936 "zcopy": true, 00:17:54.936 "get_zone_info": false, 00:17:54.936 "zone_management": false, 00:17:54.936 "zone_append": false, 00:17:54.936 "compare": false, 00:17:54.936 "compare_and_write": false, 00:17:54.936 "abort": true, 00:17:54.936 "seek_hole": false, 00:17:54.936 "seek_data": false, 00:17:54.936 "copy": true, 00:17:54.936 "nvme_iov_md": false 00:17:54.936 }, 00:17:54.936 "memory_domains": [ 00:17:54.936 { 00:17:54.936 "dma_device_id": "system", 00:17:54.936 "dma_device_type": 1 00:17:54.936 }, 00:17:54.936 { 00:17:54.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.936 "dma_device_type": 2 00:17:54.936 } 00:17:54.936 ], 00:17:54.936 "driver_specific": {} 00:17:54.936 } 00:17:54.936 ] 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.936 "name": "Existed_Raid", 00:17:54.936 "uuid": "31d11dff-e714-44a5-9237-c9d230700efa", 00:17:54.936 "strip_size_kb": 64, 00:17:54.936 "state": "configuring", 00:17:54.936 "raid_level": "concat", 00:17:54.936 "superblock": true, 00:17:54.936 "num_base_bdevs": 3, 00:17:54.936 "num_base_bdevs_discovered": 2, 00:17:54.936 "num_base_bdevs_operational": 3, 00:17:54.936 "base_bdevs_list": [ 00:17:54.936 { 00:17:54.936 "name": "BaseBdev1", 00:17:54.936 "uuid": "5b68f68e-0ee3-40d5-a3c0-adb276ba4a25", 00:17:54.936 "is_configured": true, 00:17:54.936 "data_offset": 2048, 00:17:54.936 "data_size": 63488 00:17:54.936 }, 00:17:54.936 { 00:17:54.936 "name": "BaseBdev2", 00:17:54.936 "uuid": "c04e6c0d-4659-49cb-a3b2-353d3da77d2e", 00:17:54.936 "is_configured": true, 00:17:54.936 "data_offset": 2048, 00:17:54.936 "data_size": 63488 00:17:54.936 }, 00:17:54.936 { 00:17:54.936 "name": "BaseBdev3", 00:17:54.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.936 "is_configured": false, 00:17:54.936 "data_offset": 0, 00:17:54.936 "data_size": 0 00:17:54.936 } 00:17:54.936 ] 00:17:54.936 }' 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.936 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.193 05:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:55.193 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.193 05:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.193 [2024-11-20 05:28:27.013213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:55.193 [2024-11-20 05:28:27.013503] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:55.193 [2024-11-20 05:28:27.013526] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:55.193 [2024-11-20 05:28:27.013966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:55.193 [2024-11-20 05:28:27.014131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:55.193 [2024-11-20 05:28:27.014140] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:55.193 BaseBdev3 00:17:55.193 [2024-11-20 05:28:27.014285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.193 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.193 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:55.193 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:55.193 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:55.193 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:55.193 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:55.193 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:55.193 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:55.193 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.193 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.451 [ 00:17:55.451 { 00:17:55.451 "name": "BaseBdev3", 00:17:55.451 "aliases": [ 00:17:55.451 "becee180-6f67-4823-98c0-1a395810afff" 00:17:55.451 ], 00:17:55.451 "product_name": "Malloc disk", 00:17:55.451 "block_size": 512, 00:17:55.451 "num_blocks": 65536, 00:17:55.451 "uuid": "becee180-6f67-4823-98c0-1a395810afff", 00:17:55.451 "assigned_rate_limits": { 00:17:55.451 "rw_ios_per_sec": 0, 00:17:55.451 "rw_mbytes_per_sec": 0, 00:17:55.451 "r_mbytes_per_sec": 0, 00:17:55.451 "w_mbytes_per_sec": 0 00:17:55.451 }, 00:17:55.451 "claimed": true, 00:17:55.451 "claim_type": "exclusive_write", 00:17:55.451 "zoned": false, 00:17:55.451 "supported_io_types": { 00:17:55.451 "read": true, 00:17:55.451 "write": true, 00:17:55.451 "unmap": true, 00:17:55.451 "flush": true, 00:17:55.451 "reset": true, 00:17:55.451 "nvme_admin": false, 00:17:55.451 "nvme_io": false, 00:17:55.451 "nvme_io_md": false, 00:17:55.451 "write_zeroes": true, 00:17:55.451 "zcopy": true, 00:17:55.451 "get_zone_info": false, 00:17:55.451 "zone_management": false, 00:17:55.451 "zone_append": false, 00:17:55.451 "compare": false, 00:17:55.451 "compare_and_write": false, 00:17:55.451 "abort": true, 00:17:55.451 "seek_hole": false, 00:17:55.451 "seek_data": false, 00:17:55.451 "copy": true, 00:17:55.451 "nvme_iov_md": false 00:17:55.451 }, 00:17:55.451 "memory_domains": [ 00:17:55.451 { 00:17:55.451 "dma_device_id": "system", 00:17:55.451 "dma_device_type": 1 00:17:55.451 }, 00:17:55.451 { 00:17:55.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.451 "dma_device_type": 2 00:17:55.451 } 00:17:55.451 ], 00:17:55.451 "driver_specific": 
{} 00:17:55.451 } 00:17:55.451 ] 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.451 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.451 "name": "Existed_Raid", 00:17:55.451 "uuid": "31d11dff-e714-44a5-9237-c9d230700efa", 00:17:55.451 "strip_size_kb": 64, 00:17:55.451 "state": "online", 00:17:55.451 "raid_level": "concat", 00:17:55.451 "superblock": true, 00:17:55.451 "num_base_bdevs": 3, 00:17:55.451 "num_base_bdevs_discovered": 3, 00:17:55.452 "num_base_bdevs_operational": 3, 00:17:55.452 "base_bdevs_list": [ 00:17:55.452 { 00:17:55.452 "name": "BaseBdev1", 00:17:55.452 "uuid": "5b68f68e-0ee3-40d5-a3c0-adb276ba4a25", 00:17:55.452 "is_configured": true, 00:17:55.452 "data_offset": 2048, 00:17:55.452 "data_size": 63488 00:17:55.452 }, 00:17:55.452 { 00:17:55.452 "name": "BaseBdev2", 00:17:55.452 "uuid": "c04e6c0d-4659-49cb-a3b2-353d3da77d2e", 00:17:55.452 "is_configured": true, 00:17:55.452 "data_offset": 2048, 00:17:55.452 "data_size": 63488 00:17:55.452 }, 00:17:55.452 { 00:17:55.452 "name": "BaseBdev3", 00:17:55.452 "uuid": "becee180-6f67-4823-98c0-1a395810afff", 00:17:55.452 "is_configured": true, 00:17:55.452 "data_offset": 2048, 00:17:55.452 "data_size": 63488 00:17:55.452 } 00:17:55.452 ] 00:17:55.452 }' 00:17:55.452 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.452 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.710 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:55.710 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:55.710 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:17:55.710 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:55.710 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:55.710 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:55.710 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:55.710 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:55.710 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.710 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.710 [2024-11-20 05:28:27.349721] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.710 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.710 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:55.710 "name": "Existed_Raid", 00:17:55.710 "aliases": [ 00:17:55.710 "31d11dff-e714-44a5-9237-c9d230700efa" 00:17:55.710 ], 00:17:55.710 "product_name": "Raid Volume", 00:17:55.710 "block_size": 512, 00:17:55.710 "num_blocks": 190464, 00:17:55.710 "uuid": "31d11dff-e714-44a5-9237-c9d230700efa", 00:17:55.710 "assigned_rate_limits": { 00:17:55.710 "rw_ios_per_sec": 0, 00:17:55.710 "rw_mbytes_per_sec": 0, 00:17:55.710 "r_mbytes_per_sec": 0, 00:17:55.710 "w_mbytes_per_sec": 0 00:17:55.710 }, 00:17:55.710 "claimed": false, 00:17:55.710 "zoned": false, 00:17:55.710 "supported_io_types": { 00:17:55.710 "read": true, 00:17:55.710 "write": true, 00:17:55.710 "unmap": true, 00:17:55.710 "flush": true, 00:17:55.710 "reset": true, 00:17:55.710 "nvme_admin": false, 00:17:55.710 "nvme_io": false, 00:17:55.710 "nvme_io_md": false, 00:17:55.710 
"write_zeroes": true, 00:17:55.710 "zcopy": false, 00:17:55.710 "get_zone_info": false, 00:17:55.710 "zone_management": false, 00:17:55.710 "zone_append": false, 00:17:55.710 "compare": false, 00:17:55.710 "compare_and_write": false, 00:17:55.710 "abort": false, 00:17:55.710 "seek_hole": false, 00:17:55.710 "seek_data": false, 00:17:55.710 "copy": false, 00:17:55.710 "nvme_iov_md": false 00:17:55.710 }, 00:17:55.710 "memory_domains": [ 00:17:55.710 { 00:17:55.710 "dma_device_id": "system", 00:17:55.710 "dma_device_type": 1 00:17:55.710 }, 00:17:55.710 { 00:17:55.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.710 "dma_device_type": 2 00:17:55.710 }, 00:17:55.710 { 00:17:55.710 "dma_device_id": "system", 00:17:55.710 "dma_device_type": 1 00:17:55.710 }, 00:17:55.710 { 00:17:55.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.710 "dma_device_type": 2 00:17:55.710 }, 00:17:55.710 { 00:17:55.710 "dma_device_id": "system", 00:17:55.710 "dma_device_type": 1 00:17:55.710 }, 00:17:55.710 { 00:17:55.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.710 "dma_device_type": 2 00:17:55.710 } 00:17:55.710 ], 00:17:55.710 "driver_specific": { 00:17:55.710 "raid": { 00:17:55.710 "uuid": "31d11dff-e714-44a5-9237-c9d230700efa", 00:17:55.710 "strip_size_kb": 64, 00:17:55.710 "state": "online", 00:17:55.710 "raid_level": "concat", 00:17:55.710 "superblock": true, 00:17:55.710 "num_base_bdevs": 3, 00:17:55.710 "num_base_bdevs_discovered": 3, 00:17:55.710 "num_base_bdevs_operational": 3, 00:17:55.710 "base_bdevs_list": [ 00:17:55.710 { 00:17:55.710 "name": "BaseBdev1", 00:17:55.710 "uuid": "5b68f68e-0ee3-40d5-a3c0-adb276ba4a25", 00:17:55.710 "is_configured": true, 00:17:55.710 "data_offset": 2048, 00:17:55.710 "data_size": 63488 00:17:55.710 }, 00:17:55.710 { 00:17:55.710 "name": "BaseBdev2", 00:17:55.710 "uuid": "c04e6c0d-4659-49cb-a3b2-353d3da77d2e", 00:17:55.710 "is_configured": true, 00:17:55.710 "data_offset": 2048, 00:17:55.710 "data_size": 63488 00:17:55.710 }, 
00:17:55.710 { 00:17:55.710 "name": "BaseBdev3", 00:17:55.710 "uuid": "becee180-6f67-4823-98c0-1a395810afff", 00:17:55.710 "is_configured": true, 00:17:55.710 "data_offset": 2048, 00:17:55.710 "data_size": 63488 00:17:55.710 } 00:17:55.710 ] 00:17:55.710 } 00:17:55.710 } 00:17:55.710 }' 00:17:55.710 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:55.710 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:55.711 BaseBdev2 00:17:55.711 BaseBdev3' 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.711 
05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.711 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.969 [2024-11-20 05:28:27.545477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:55.969 [2024-11-20 05:28:27.545510] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.969 [2024-11-20 05:28:27.545569] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.969 "name": "Existed_Raid", 00:17:55.969 "uuid": "31d11dff-e714-44a5-9237-c9d230700efa", 00:17:55.969 "strip_size_kb": 64, 00:17:55.969 "state": "offline", 00:17:55.969 "raid_level": "concat", 00:17:55.969 "superblock": true, 00:17:55.969 "num_base_bdevs": 3, 00:17:55.969 "num_base_bdevs_discovered": 2, 00:17:55.969 "num_base_bdevs_operational": 2, 00:17:55.969 "base_bdevs_list": [ 00:17:55.969 { 00:17:55.969 "name": null, 00:17:55.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.969 "is_configured": false, 00:17:55.969 "data_offset": 0, 00:17:55.969 "data_size": 63488 00:17:55.969 }, 00:17:55.969 { 00:17:55.969 "name": "BaseBdev2", 00:17:55.969 "uuid": "c04e6c0d-4659-49cb-a3b2-353d3da77d2e", 00:17:55.969 "is_configured": true, 00:17:55.969 "data_offset": 2048, 00:17:55.969 "data_size": 63488 00:17:55.969 }, 00:17:55.969 { 00:17:55.969 "name": "BaseBdev3", 00:17:55.969 "uuid": "becee180-6f67-4823-98c0-1a395810afff", 
00:17:55.969 "is_configured": true, 00:17:55.969 "data_offset": 2048, 00:17:55.969 "data_size": 63488 00:17:55.969 } 00:17:55.969 ] 00:17:55.969 }' 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.969 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.226 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:56.226 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:56.226 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.226 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:56.226 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.226 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.226 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.226 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:56.226 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:56.226 05:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:56.226 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.226 05:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.226 [2024-11-20 05:28:27.942831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:56.226 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.226 05:28:28 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:56.226 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:56.226 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.226 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.226 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.226 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:56.226 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.226 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:56.226 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:56.227 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:56.227 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.227 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.227 [2024-11-20 05:28:28.057823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:56.486 [2024-11-20 05:28:28.057985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:56.486 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.486 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:56.486 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.487 BaseBdev2 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.487 [ 00:17:56.487 { 00:17:56.487 "name": "BaseBdev2", 00:17:56.487 "aliases": [ 00:17:56.487 "f5fefb4f-39a2-4fd3-b735-769e77c5c467" 00:17:56.487 ], 00:17:56.487 "product_name": "Malloc disk", 00:17:56.487 "block_size": 512, 00:17:56.487 "num_blocks": 65536, 00:17:56.487 "uuid": "f5fefb4f-39a2-4fd3-b735-769e77c5c467", 00:17:56.487 "assigned_rate_limits": { 00:17:56.487 "rw_ios_per_sec": 0, 00:17:56.487 "rw_mbytes_per_sec": 0, 00:17:56.487 "r_mbytes_per_sec": 0, 00:17:56.487 "w_mbytes_per_sec": 0 00:17:56.487 }, 00:17:56.487 "claimed": false, 00:17:56.487 "zoned": false, 00:17:56.487 "supported_io_types": { 00:17:56.487 "read": true, 00:17:56.487 "write": true, 00:17:56.487 "unmap": true, 00:17:56.487 "flush": true, 00:17:56.487 "reset": true, 00:17:56.487 "nvme_admin": false, 00:17:56.487 "nvme_io": false, 00:17:56.487 "nvme_io_md": false, 00:17:56.487 "write_zeroes": true, 00:17:56.487 "zcopy": true, 00:17:56.487 "get_zone_info": false, 00:17:56.487 "zone_management": false, 00:17:56.487 
"zone_append": false, 00:17:56.487 "compare": false, 00:17:56.487 "compare_and_write": false, 00:17:56.487 "abort": true, 00:17:56.487 "seek_hole": false, 00:17:56.487 "seek_data": false, 00:17:56.487 "copy": true, 00:17:56.487 "nvme_iov_md": false 00:17:56.487 }, 00:17:56.487 "memory_domains": [ 00:17:56.487 { 00:17:56.487 "dma_device_id": "system", 00:17:56.487 "dma_device_type": 1 00:17:56.487 }, 00:17:56.487 { 00:17:56.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.487 "dma_device_type": 2 00:17:56.487 } 00:17:56.487 ], 00:17:56.487 "driver_specific": {} 00:17:56.487 } 00:17:56.487 ] 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.487 BaseBdev3 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:56.487 
05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.487 [ 00:17:56.487 { 00:17:56.487 "name": "BaseBdev3", 00:17:56.487 "aliases": [ 00:17:56.487 "e3298ab2-32ed-40e2-a91b-6e489c5b4787" 00:17:56.487 ], 00:17:56.487 "product_name": "Malloc disk", 00:17:56.487 "block_size": 512, 00:17:56.487 "num_blocks": 65536, 00:17:56.487 "uuid": "e3298ab2-32ed-40e2-a91b-6e489c5b4787", 00:17:56.487 "assigned_rate_limits": { 00:17:56.487 "rw_ios_per_sec": 0, 00:17:56.487 "rw_mbytes_per_sec": 0, 00:17:56.487 "r_mbytes_per_sec": 0, 00:17:56.487 "w_mbytes_per_sec": 0 00:17:56.487 }, 00:17:56.487 "claimed": false, 00:17:56.487 "zoned": false, 00:17:56.487 "supported_io_types": { 00:17:56.487 "read": true, 00:17:56.487 "write": true, 00:17:56.487 "unmap": true, 00:17:56.487 "flush": true, 00:17:56.487 "reset": true, 00:17:56.487 "nvme_admin": false, 00:17:56.487 "nvme_io": false, 00:17:56.487 "nvme_io_md": false, 00:17:56.487 "write_zeroes": true, 00:17:56.487 "zcopy": true, 00:17:56.487 "get_zone_info": false, 
00:17:56.487 "zone_management": false, 00:17:56.487 "zone_append": false, 00:17:56.487 "compare": false, 00:17:56.487 "compare_and_write": false, 00:17:56.487 "abort": true, 00:17:56.487 "seek_hole": false, 00:17:56.487 "seek_data": false, 00:17:56.487 "copy": true, 00:17:56.487 "nvme_iov_md": false 00:17:56.487 }, 00:17:56.487 "memory_domains": [ 00:17:56.487 { 00:17:56.487 "dma_device_id": "system", 00:17:56.487 "dma_device_type": 1 00:17:56.487 }, 00:17:56.487 { 00:17:56.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.487 "dma_device_type": 2 00:17:56.487 } 00:17:56.487 ], 00:17:56.487 "driver_specific": {} 00:17:56.487 } 00:17:56.487 ] 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.487 [2024-11-20 05:28:28.275117] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:56.487 [2024-11-20 05:28:28.275269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:56.487 [2024-11-20 05:28:28.275342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.487 [2024-11-20 05:28:28.277344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.487 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:56.488 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.488 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.488 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.488 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.488 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.488 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.488 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.488 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.488 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.488 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:56.488 "name": "Existed_Raid", 00:17:56.488 "uuid": "4ecb5b45-3dcb-4ff0-9d0b-08b9ca3230c6", 00:17:56.488 "strip_size_kb": 64, 00:17:56.488 "state": "configuring", 00:17:56.488 "raid_level": "concat", 00:17:56.488 "superblock": true, 00:17:56.488 "num_base_bdevs": 3, 00:17:56.488 "num_base_bdevs_discovered": 2, 00:17:56.488 "num_base_bdevs_operational": 3, 00:17:56.488 "base_bdevs_list": [ 00:17:56.488 { 00:17:56.488 "name": "BaseBdev1", 00:17:56.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.488 "is_configured": false, 00:17:56.488 "data_offset": 0, 00:17:56.488 "data_size": 0 00:17:56.488 }, 00:17:56.488 { 00:17:56.488 "name": "BaseBdev2", 00:17:56.488 "uuid": "f5fefb4f-39a2-4fd3-b735-769e77c5c467", 00:17:56.488 "is_configured": true, 00:17:56.488 "data_offset": 2048, 00:17:56.488 "data_size": 63488 00:17:56.488 }, 00:17:56.488 { 00:17:56.488 "name": "BaseBdev3", 00:17:56.488 "uuid": "e3298ab2-32ed-40e2-a91b-6e489c5b4787", 00:17:56.488 "is_configured": true, 00:17:56.488 "data_offset": 2048, 00:17:56.488 "data_size": 63488 00:17:56.488 } 00:17:56.488 ] 00:17:56.488 }' 00:17:56.488 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.488 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.117 [2024-11-20 05:28:28.651211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.117 "name": "Existed_Raid", 00:17:57.117 "uuid": "4ecb5b45-3dcb-4ff0-9d0b-08b9ca3230c6", 00:17:57.117 "strip_size_kb": 64, 00:17:57.117 "state": "configuring", 00:17:57.117 "raid_level": "concat", 
00:17:57.117 "superblock": true, 00:17:57.117 "num_base_bdevs": 3, 00:17:57.117 "num_base_bdevs_discovered": 1, 00:17:57.117 "num_base_bdevs_operational": 3, 00:17:57.117 "base_bdevs_list": [ 00:17:57.117 { 00:17:57.117 "name": "BaseBdev1", 00:17:57.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.117 "is_configured": false, 00:17:57.117 "data_offset": 0, 00:17:57.117 "data_size": 0 00:17:57.117 }, 00:17:57.117 { 00:17:57.117 "name": null, 00:17:57.117 "uuid": "f5fefb4f-39a2-4fd3-b735-769e77c5c467", 00:17:57.117 "is_configured": false, 00:17:57.117 "data_offset": 0, 00:17:57.117 "data_size": 63488 00:17:57.117 }, 00:17:57.117 { 00:17:57.117 "name": "BaseBdev3", 00:17:57.117 "uuid": "e3298ab2-32ed-40e2-a91b-6e489c5b4787", 00:17:57.117 "is_configured": true, 00:17:57.117 "data_offset": 2048, 00:17:57.117 "data_size": 63488 00:17:57.117 } 00:17:57.117 ] 00:17:57.117 }' 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.117 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.377 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.377 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.377 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.377 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:57.377 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.377 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:57.377 05:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:57.377 05:28:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.377 05:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.377 [2024-11-20 05:28:29.020148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.377 BaseBdev1 00:17:57.377 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.377 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:57.377 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:57.377 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:57.377 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:57.377 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:57.377 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:57.377 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:57.377 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.377 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.377 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.377 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:57.377 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.377 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.377 [ 00:17:57.377 { 00:17:57.377 "name": "BaseBdev1", 00:17:57.377 
"aliases": [ 00:17:57.377 "d66024dd-974e-4651-b2a4-1d012404fc03" 00:17:57.377 ], 00:17:57.377 "product_name": "Malloc disk", 00:17:57.377 "block_size": 512, 00:17:57.377 "num_blocks": 65536, 00:17:57.377 "uuid": "d66024dd-974e-4651-b2a4-1d012404fc03", 00:17:57.377 "assigned_rate_limits": { 00:17:57.377 "rw_ios_per_sec": 0, 00:17:57.377 "rw_mbytes_per_sec": 0, 00:17:57.377 "r_mbytes_per_sec": 0, 00:17:57.377 "w_mbytes_per_sec": 0 00:17:57.377 }, 00:17:57.377 "claimed": true, 00:17:57.377 "claim_type": "exclusive_write", 00:17:57.377 "zoned": false, 00:17:57.377 "supported_io_types": { 00:17:57.377 "read": true, 00:17:57.377 "write": true, 00:17:57.377 "unmap": true, 00:17:57.377 "flush": true, 00:17:57.377 "reset": true, 00:17:57.377 "nvme_admin": false, 00:17:57.377 "nvme_io": false, 00:17:57.377 "nvme_io_md": false, 00:17:57.377 "write_zeroes": true, 00:17:57.377 "zcopy": true, 00:17:57.377 "get_zone_info": false, 00:17:57.377 "zone_management": false, 00:17:57.377 "zone_append": false, 00:17:57.377 "compare": false, 00:17:57.377 "compare_and_write": false, 00:17:57.377 "abort": true, 00:17:57.377 "seek_hole": false, 00:17:57.377 "seek_data": false, 00:17:57.377 "copy": true, 00:17:57.378 "nvme_iov_md": false 00:17:57.378 }, 00:17:57.378 "memory_domains": [ 00:17:57.378 { 00:17:57.378 "dma_device_id": "system", 00:17:57.378 "dma_device_type": 1 00:17:57.378 }, 00:17:57.378 { 00:17:57.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.378 "dma_device_type": 2 00:17:57.378 } 00:17:57.378 ], 00:17:57.378 "driver_specific": {} 00:17:57.378 } 00:17:57.378 ] 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:57.378 05:28:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.378 "name": "Existed_Raid", 00:17:57.378 "uuid": "4ecb5b45-3dcb-4ff0-9d0b-08b9ca3230c6", 00:17:57.378 "strip_size_kb": 64, 00:17:57.378 "state": "configuring", 00:17:57.378 "raid_level": "concat", 00:17:57.378 "superblock": true, 00:17:57.378 "num_base_bdevs": 3, 00:17:57.378 
"num_base_bdevs_discovered": 2, 00:17:57.378 "num_base_bdevs_operational": 3, 00:17:57.378 "base_bdevs_list": [ 00:17:57.378 { 00:17:57.378 "name": "BaseBdev1", 00:17:57.378 "uuid": "d66024dd-974e-4651-b2a4-1d012404fc03", 00:17:57.378 "is_configured": true, 00:17:57.378 "data_offset": 2048, 00:17:57.378 "data_size": 63488 00:17:57.378 }, 00:17:57.378 { 00:17:57.378 "name": null, 00:17:57.378 "uuid": "f5fefb4f-39a2-4fd3-b735-769e77c5c467", 00:17:57.378 "is_configured": false, 00:17:57.378 "data_offset": 0, 00:17:57.378 "data_size": 63488 00:17:57.378 }, 00:17:57.378 { 00:17:57.378 "name": "BaseBdev3", 00:17:57.378 "uuid": "e3298ab2-32ed-40e2-a91b-6e489c5b4787", 00:17:57.378 "is_configured": true, 00:17:57.378 "data_offset": 2048, 00:17:57.378 "data_size": 63488 00:17:57.378 } 00:17:57.378 ] 00:17:57.378 }' 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.378 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.637 05:28:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.637 [2024-11-20 05:28:29.408314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.637 05:28:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.637 "name": "Existed_Raid", 00:17:57.637 "uuid": "4ecb5b45-3dcb-4ff0-9d0b-08b9ca3230c6", 00:17:57.637 "strip_size_kb": 64, 00:17:57.637 "state": "configuring", 00:17:57.637 "raid_level": "concat", 00:17:57.637 "superblock": true, 00:17:57.637 "num_base_bdevs": 3, 00:17:57.637 "num_base_bdevs_discovered": 1, 00:17:57.637 "num_base_bdevs_operational": 3, 00:17:57.637 "base_bdevs_list": [ 00:17:57.637 { 00:17:57.637 "name": "BaseBdev1", 00:17:57.637 "uuid": "d66024dd-974e-4651-b2a4-1d012404fc03", 00:17:57.637 "is_configured": true, 00:17:57.637 "data_offset": 2048, 00:17:57.637 "data_size": 63488 00:17:57.637 }, 00:17:57.637 { 00:17:57.637 "name": null, 00:17:57.637 "uuid": "f5fefb4f-39a2-4fd3-b735-769e77c5c467", 00:17:57.637 "is_configured": false, 00:17:57.637 "data_offset": 0, 00:17:57.637 "data_size": 63488 00:17:57.637 }, 00:17:57.637 { 00:17:57.637 "name": null, 00:17:57.637 "uuid": "e3298ab2-32ed-40e2-a91b-6e489c5b4787", 00:17:57.637 "is_configured": false, 00:17:57.637 "data_offset": 0, 00:17:57.637 "data_size": 63488 00:17:57.637 } 00:17:57.637 ] 00:17:57.637 }' 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.637 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.204 05:28:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.204 [2024-11-20 05:28:29.828433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.204 "name": "Existed_Raid", 00:17:58.204 "uuid": "4ecb5b45-3dcb-4ff0-9d0b-08b9ca3230c6", 00:17:58.204 "strip_size_kb": 64, 00:17:58.204 "state": "configuring", 00:17:58.204 "raid_level": "concat", 00:17:58.204 "superblock": true, 00:17:58.204 "num_base_bdevs": 3, 00:17:58.204 "num_base_bdevs_discovered": 2, 00:17:58.204 "num_base_bdevs_operational": 3, 00:17:58.204 "base_bdevs_list": [ 00:17:58.204 { 00:17:58.204 "name": "BaseBdev1", 00:17:58.204 "uuid": "d66024dd-974e-4651-b2a4-1d012404fc03", 00:17:58.204 "is_configured": true, 00:17:58.204 "data_offset": 2048, 00:17:58.204 "data_size": 63488 00:17:58.204 }, 00:17:58.204 { 00:17:58.204 "name": null, 00:17:58.204 "uuid": "f5fefb4f-39a2-4fd3-b735-769e77c5c467", 00:17:58.204 "is_configured": false, 00:17:58.204 "data_offset": 0, 00:17:58.204 "data_size": 63488 00:17:58.204 }, 00:17:58.204 { 00:17:58.204 "name": "BaseBdev3", 00:17:58.204 "uuid": "e3298ab2-32ed-40e2-a91b-6e489c5b4787", 00:17:58.204 "is_configured": true, 00:17:58.204 "data_offset": 2048, 00:17:58.204 "data_size": 63488 00:17:58.204 } 00:17:58.204 ] 00:17:58.204 }' 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.204 05:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:58.464 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.464 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.464 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.464 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:58.464 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.464 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:58.464 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:58.464 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.464 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.464 [2024-11-20 05:28:30.220550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:58.464 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.465 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:58.465 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.465 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.465 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:58.465 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.465 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:17:58.465 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.465 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.465 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.465 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.465 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.465 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.465 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.465 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.725 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.725 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.725 "name": "Existed_Raid", 00:17:58.725 "uuid": "4ecb5b45-3dcb-4ff0-9d0b-08b9ca3230c6", 00:17:58.725 "strip_size_kb": 64, 00:17:58.725 "state": "configuring", 00:17:58.725 "raid_level": "concat", 00:17:58.725 "superblock": true, 00:17:58.725 "num_base_bdevs": 3, 00:17:58.725 "num_base_bdevs_discovered": 1, 00:17:58.725 "num_base_bdevs_operational": 3, 00:17:58.725 "base_bdevs_list": [ 00:17:58.725 { 00:17:58.725 "name": null, 00:17:58.725 "uuid": "d66024dd-974e-4651-b2a4-1d012404fc03", 00:17:58.725 "is_configured": false, 00:17:58.725 "data_offset": 0, 00:17:58.725 "data_size": 63488 00:17:58.725 }, 00:17:58.725 { 00:17:58.725 "name": null, 00:17:58.725 "uuid": "f5fefb4f-39a2-4fd3-b735-769e77c5c467", 00:17:58.725 "is_configured": false, 00:17:58.725 "data_offset": 0, 00:17:58.725 "data_size": 63488 00:17:58.725 
}, 00:17:58.725 { 00:17:58.725 "name": "BaseBdev3", 00:17:58.725 "uuid": "e3298ab2-32ed-40e2-a91b-6e489c5b4787", 00:17:58.725 "is_configured": true, 00:17:58.725 "data_offset": 2048, 00:17:58.725 "data_size": 63488 00:17:58.725 } 00:17:58.725 ] 00:17:58.725 }' 00:17:58.725 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.725 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.984 [2024-11-20 05:28:30.638423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.984 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.984 "name": "Existed_Raid", 00:17:58.984 "uuid": "4ecb5b45-3dcb-4ff0-9d0b-08b9ca3230c6", 00:17:58.984 "strip_size_kb": 64, 00:17:58.984 "state": "configuring", 00:17:58.984 "raid_level": "concat", 00:17:58.984 "superblock": true, 00:17:58.984 "num_base_bdevs": 3, 00:17:58.984 "num_base_bdevs_discovered": 2, 
00:17:58.984 "num_base_bdevs_operational": 3, 00:17:58.984 "base_bdevs_list": [ 00:17:58.984 { 00:17:58.984 "name": null, 00:17:58.984 "uuid": "d66024dd-974e-4651-b2a4-1d012404fc03", 00:17:58.984 "is_configured": false, 00:17:58.984 "data_offset": 0, 00:17:58.984 "data_size": 63488 00:17:58.984 }, 00:17:58.984 { 00:17:58.984 "name": "BaseBdev2", 00:17:58.984 "uuid": "f5fefb4f-39a2-4fd3-b735-769e77c5c467", 00:17:58.985 "is_configured": true, 00:17:58.985 "data_offset": 2048, 00:17:58.985 "data_size": 63488 00:17:58.985 }, 00:17:58.985 { 00:17:58.985 "name": "BaseBdev3", 00:17:58.985 "uuid": "e3298ab2-32ed-40e2-a91b-6e489c5b4787", 00:17:58.985 "is_configured": true, 00:17:58.985 "data_offset": 2048, 00:17:58.985 "data_size": 63488 00:17:58.985 } 00:17:58.985 ] 00:17:58.985 }' 00:17:58.985 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.985 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.243 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.243 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.243 05:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:59.243 05:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.243 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.243 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:59.243 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.243 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.243 05:28:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:59.243 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:59.243 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.243 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d66024dd-974e-4651-b2a4-1d012404fc03 00:17:59.243 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.243 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.502 [2024-11-20 05:28:31.079541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:59.502 NewBaseBdev 00:17:59.502 [2024-11-20 05:28:31.079948] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:59.502 [2024-11-20 05:28:31.079969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:59.502 [2024-11-20 05:28:31.080225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:59.502 [2024-11-20 05:28:31.080344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:59.502 [2024-11-20 05:28:31.080352] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:59.502 [2024-11-20 05:28:31.080485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:59.502 05:28:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.502 [ 00:17:59.502 { 00:17:59.502 "name": "NewBaseBdev", 00:17:59.502 "aliases": [ 00:17:59.502 "d66024dd-974e-4651-b2a4-1d012404fc03" 00:17:59.502 ], 00:17:59.502 "product_name": "Malloc disk", 00:17:59.502 "block_size": 512, 00:17:59.502 "num_blocks": 65536, 00:17:59.502 "uuid": "d66024dd-974e-4651-b2a4-1d012404fc03", 00:17:59.502 "assigned_rate_limits": { 00:17:59.502 "rw_ios_per_sec": 0, 00:17:59.502 "rw_mbytes_per_sec": 0, 00:17:59.502 "r_mbytes_per_sec": 0, 00:17:59.502 "w_mbytes_per_sec": 0 00:17:59.502 }, 00:17:59.502 "claimed": true, 00:17:59.502 "claim_type": "exclusive_write", 00:17:59.502 "zoned": false, 00:17:59.502 "supported_io_types": { 00:17:59.502 "read": true, 00:17:59.502 "write": true, 00:17:59.502 "unmap": true, 
00:17:59.502 "flush": true, 00:17:59.502 "reset": true, 00:17:59.502 "nvme_admin": false, 00:17:59.502 "nvme_io": false, 00:17:59.502 "nvme_io_md": false, 00:17:59.502 "write_zeroes": true, 00:17:59.502 "zcopy": true, 00:17:59.502 "get_zone_info": false, 00:17:59.502 "zone_management": false, 00:17:59.502 "zone_append": false, 00:17:59.502 "compare": false, 00:17:59.502 "compare_and_write": false, 00:17:59.502 "abort": true, 00:17:59.502 "seek_hole": false, 00:17:59.502 "seek_data": false, 00:17:59.502 "copy": true, 00:17:59.502 "nvme_iov_md": false 00:17:59.502 }, 00:17:59.502 "memory_domains": [ 00:17:59.502 { 00:17:59.502 "dma_device_id": "system", 00:17:59.502 "dma_device_type": 1 00:17:59.502 }, 00:17:59.502 { 00:17:59.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.502 "dma_device_type": 2 00:17:59.502 } 00:17:59.502 ], 00:17:59.502 "driver_specific": {} 00:17:59.502 } 00:17:59.502 ] 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.502 05:28:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.502 "name": "Existed_Raid", 00:17:59.502 "uuid": "4ecb5b45-3dcb-4ff0-9d0b-08b9ca3230c6", 00:17:59.502 "strip_size_kb": 64, 00:17:59.502 "state": "online", 00:17:59.502 "raid_level": "concat", 00:17:59.502 "superblock": true, 00:17:59.502 "num_base_bdevs": 3, 00:17:59.502 "num_base_bdevs_discovered": 3, 00:17:59.502 "num_base_bdevs_operational": 3, 00:17:59.502 "base_bdevs_list": [ 00:17:59.502 { 00:17:59.502 "name": "NewBaseBdev", 00:17:59.502 "uuid": "d66024dd-974e-4651-b2a4-1d012404fc03", 00:17:59.502 "is_configured": true, 00:17:59.502 "data_offset": 2048, 00:17:59.502 "data_size": 63488 00:17:59.502 }, 00:17:59.502 { 00:17:59.502 "name": "BaseBdev2", 00:17:59.502 "uuid": "f5fefb4f-39a2-4fd3-b735-769e77c5c467", 00:17:59.502 "is_configured": true, 00:17:59.502 "data_offset": 2048, 00:17:59.502 "data_size": 63488 00:17:59.502 }, 00:17:59.502 { 00:17:59.502 "name": "BaseBdev3", 00:17:59.502 "uuid": "e3298ab2-32ed-40e2-a91b-6e489c5b4787", 00:17:59.502 "is_configured": 
true, 00:17:59.502 "data_offset": 2048, 00:17:59.502 "data_size": 63488 00:17:59.502 } 00:17:59.502 ] 00:17:59.502 }' 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.502 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.761 [2024-11-20 05:28:31.439966] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:59.761 "name": "Existed_Raid", 00:17:59.761 "aliases": [ 00:17:59.761 "4ecb5b45-3dcb-4ff0-9d0b-08b9ca3230c6" 00:17:59.761 ], 00:17:59.761 "product_name": "Raid Volume", 
00:17:59.761 "block_size": 512, 00:17:59.761 "num_blocks": 190464, 00:17:59.761 "uuid": "4ecb5b45-3dcb-4ff0-9d0b-08b9ca3230c6", 00:17:59.761 "assigned_rate_limits": { 00:17:59.761 "rw_ios_per_sec": 0, 00:17:59.761 "rw_mbytes_per_sec": 0, 00:17:59.761 "r_mbytes_per_sec": 0, 00:17:59.761 "w_mbytes_per_sec": 0 00:17:59.761 }, 00:17:59.761 "claimed": false, 00:17:59.761 "zoned": false, 00:17:59.761 "supported_io_types": { 00:17:59.761 "read": true, 00:17:59.761 "write": true, 00:17:59.761 "unmap": true, 00:17:59.761 "flush": true, 00:17:59.761 "reset": true, 00:17:59.761 "nvme_admin": false, 00:17:59.761 "nvme_io": false, 00:17:59.761 "nvme_io_md": false, 00:17:59.761 "write_zeroes": true, 00:17:59.761 "zcopy": false, 00:17:59.761 "get_zone_info": false, 00:17:59.761 "zone_management": false, 00:17:59.761 "zone_append": false, 00:17:59.761 "compare": false, 00:17:59.761 "compare_and_write": false, 00:17:59.761 "abort": false, 00:17:59.761 "seek_hole": false, 00:17:59.761 "seek_data": false, 00:17:59.761 "copy": false, 00:17:59.761 "nvme_iov_md": false 00:17:59.761 }, 00:17:59.761 "memory_domains": [ 00:17:59.761 { 00:17:59.761 "dma_device_id": "system", 00:17:59.761 "dma_device_type": 1 00:17:59.761 }, 00:17:59.761 { 00:17:59.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.761 "dma_device_type": 2 00:17:59.761 }, 00:17:59.761 { 00:17:59.761 "dma_device_id": "system", 00:17:59.761 "dma_device_type": 1 00:17:59.761 }, 00:17:59.761 { 00:17:59.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.761 "dma_device_type": 2 00:17:59.761 }, 00:17:59.761 { 00:17:59.761 "dma_device_id": "system", 00:17:59.761 "dma_device_type": 1 00:17:59.761 }, 00:17:59.761 { 00:17:59.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.761 "dma_device_type": 2 00:17:59.761 } 00:17:59.761 ], 00:17:59.761 "driver_specific": { 00:17:59.761 "raid": { 00:17:59.761 "uuid": "4ecb5b45-3dcb-4ff0-9d0b-08b9ca3230c6", 00:17:59.761 "strip_size_kb": 64, 00:17:59.761 "state": "online", 00:17:59.761 
"raid_level": "concat", 00:17:59.761 "superblock": true, 00:17:59.761 "num_base_bdevs": 3, 00:17:59.761 "num_base_bdevs_discovered": 3, 00:17:59.761 "num_base_bdevs_operational": 3, 00:17:59.761 "base_bdevs_list": [ 00:17:59.761 { 00:17:59.761 "name": "NewBaseBdev", 00:17:59.761 "uuid": "d66024dd-974e-4651-b2a4-1d012404fc03", 00:17:59.761 "is_configured": true, 00:17:59.761 "data_offset": 2048, 00:17:59.761 "data_size": 63488 00:17:59.761 }, 00:17:59.761 { 00:17:59.761 "name": "BaseBdev2", 00:17:59.761 "uuid": "f5fefb4f-39a2-4fd3-b735-769e77c5c467", 00:17:59.761 "is_configured": true, 00:17:59.761 "data_offset": 2048, 00:17:59.761 "data_size": 63488 00:17:59.761 }, 00:17:59.761 { 00:17:59.761 "name": "BaseBdev3", 00:17:59.761 "uuid": "e3298ab2-32ed-40e2-a91b-6e489c5b4787", 00:17:59.761 "is_configured": true, 00:17:59.761 "data_offset": 2048, 00:17:59.761 "data_size": 63488 00:17:59.761 } 00:17:59.761 ] 00:17:59.761 } 00:17:59.761 } 00:17:59.761 }' 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:59.761 BaseBdev2 00:17:59.761 BaseBdev3' 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:59.761 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.762 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.762 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.762 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.021 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:00.021 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:00.021 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.021 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:00.021 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.021 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.021 
05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.021 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.021 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:00.021 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:00.021 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:00.021 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.021 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.021 [2024-11-20 05:28:31.635691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:00.021 [2024-11-20 05:28:31.635721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.021 [2024-11-20 05:28:31.635811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.021 [2024-11-20 05:28:31.635872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.021 [2024-11-20 05:28:31.635884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:00.021 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.021 05:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64750 00:18:00.022 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64750 ']' 00:18:00.022 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 64750 00:18:00.022 05:28:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:18:00.022 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:00.022 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64750 00:18:00.022 killing process with pid 64750 00:18:00.022 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:00.022 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:00.022 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64750' 00:18:00.022 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64750 00:18:00.022 [2024-11-20 05:28:31.665741] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:00.022 05:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64750 00:18:00.022 [2024-11-20 05:28:31.823100] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:00.959 ************************************ 00:18:00.959 END TEST raid_state_function_test_sb 00:18:00.959 ************************************ 00:18:00.959 05:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:00.959 00:18:00.959 real 0m7.840s 00:18:00.959 user 0m12.532s 00:18:00.959 sys 0m1.330s 00:18:00.959 05:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:00.959 05:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.959 05:28:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:18:00.959 05:28:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:00.959 05:28:32 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:18:00.959 05:28:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:00.959 ************************************ 00:18:00.959 START TEST raid_superblock_test 00:18:00.959 ************************************ 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:00.959 05:28:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65342 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65342 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 65342 ']' 00:18:00.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:00.959 05:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.959 [2024-11-20 05:28:32.597973] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:18:00.959 [2024-11-20 05:28:32.598147] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65342 ] 00:18:00.959 [2024-11-20 05:28:32.769985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.216 [2024-11-20 05:28:32.876572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.216 [2024-11-20 05:28:32.997753] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.216 [2024-11-20 05:28:32.997809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:18:01.782 
05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.782 malloc1 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.782 [2024-11-20 05:28:33.505220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:01.782 [2024-11-20 05:28:33.505292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.782 [2024-11-20 05:28:33.505313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:01.782 [2024-11-20 05:28:33.505322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.782 [2024-11-20 05:28:33.507318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.782 [2024-11-20 05:28:33.507355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:01.782 pt1 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:01.782 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.783 malloc2 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.783 [2024-11-20 05:28:33.543505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:01.783 [2024-11-20 05:28:33.543551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.783 [2024-11-20 05:28:33.543572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:01.783 [2024-11-20 05:28:33.543579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.783 [2024-11-20 05:28:33.545451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.783 [2024-11-20 05:28:33.545603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:01.783 
pt2 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.783 malloc3 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.783 [2024-11-20 05:28:33.592227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:01.783 [2024-11-20 05:28:33.592275] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.783 [2024-11-20 05:28:33.592294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:01.783 [2024-11-20 05:28:33.592302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.783 [2024-11-20 05:28:33.594210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.783 [2024-11-20 05:28:33.594241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:01.783 pt3 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.783 [2024-11-20 05:28:33.600271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:01.783 [2024-11-20 05:28:33.601874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.783 [2024-11-20 05:28:33.601925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:01.783 [2024-11-20 05:28:33.602056] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:01.783 [2024-11-20 05:28:33.602066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:01.783 [2024-11-20 05:28:33.602269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:18:01.783 [2024-11-20 05:28:33.602412] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:01.783 [2024-11-20 05:28:33.602419] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:01.783 [2024-11-20 05:28:33.602531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.783 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.783 05:28:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.041 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.041 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.041 "name": "raid_bdev1", 00:18:02.041 "uuid": "a88685c9-4c5c-4771-b5dc-cf5e73165cc5", 00:18:02.041 "strip_size_kb": 64, 00:18:02.041 "state": "online", 00:18:02.041 "raid_level": "concat", 00:18:02.041 "superblock": true, 00:18:02.041 "num_base_bdevs": 3, 00:18:02.041 "num_base_bdevs_discovered": 3, 00:18:02.041 "num_base_bdevs_operational": 3, 00:18:02.041 "base_bdevs_list": [ 00:18:02.041 { 00:18:02.041 "name": "pt1", 00:18:02.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:02.041 "is_configured": true, 00:18:02.041 "data_offset": 2048, 00:18:02.041 "data_size": 63488 00:18:02.041 }, 00:18:02.041 { 00:18:02.041 "name": "pt2", 00:18:02.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.041 "is_configured": true, 00:18:02.041 "data_offset": 2048, 00:18:02.041 "data_size": 63488 00:18:02.041 }, 00:18:02.041 { 00:18:02.041 "name": "pt3", 00:18:02.041 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.041 "is_configured": true, 00:18:02.041 "data_offset": 2048, 00:18:02.041 "data_size": 63488 00:18:02.041 } 00:18:02.041 ] 00:18:02.041 }' 00:18:02.041 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.041 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.299 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:02.299 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:02.299 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:02.299 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:18:02.299 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:02.299 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:02.299 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:02.299 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.299 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.299 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.299 [2024-11-20 05:28:33.924645] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.299 05:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.299 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:02.299 "name": "raid_bdev1", 00:18:02.299 "aliases": [ 00:18:02.299 "a88685c9-4c5c-4771-b5dc-cf5e73165cc5" 00:18:02.299 ], 00:18:02.299 "product_name": "Raid Volume", 00:18:02.299 "block_size": 512, 00:18:02.299 "num_blocks": 190464, 00:18:02.299 "uuid": "a88685c9-4c5c-4771-b5dc-cf5e73165cc5", 00:18:02.299 "assigned_rate_limits": { 00:18:02.299 "rw_ios_per_sec": 0, 00:18:02.299 "rw_mbytes_per_sec": 0, 00:18:02.299 "r_mbytes_per_sec": 0, 00:18:02.299 "w_mbytes_per_sec": 0 00:18:02.299 }, 00:18:02.299 "claimed": false, 00:18:02.299 "zoned": false, 00:18:02.299 "supported_io_types": { 00:18:02.299 "read": true, 00:18:02.299 "write": true, 00:18:02.299 "unmap": true, 00:18:02.299 "flush": true, 00:18:02.299 "reset": true, 00:18:02.299 "nvme_admin": false, 00:18:02.299 "nvme_io": false, 00:18:02.299 "nvme_io_md": false, 00:18:02.299 "write_zeroes": true, 00:18:02.299 "zcopy": false, 00:18:02.299 "get_zone_info": false, 00:18:02.299 "zone_management": false, 00:18:02.299 "zone_append": false, 00:18:02.299 "compare": 
false, 00:18:02.299 "compare_and_write": false, 00:18:02.299 "abort": false, 00:18:02.299 "seek_hole": false, 00:18:02.299 "seek_data": false, 00:18:02.299 "copy": false, 00:18:02.299 "nvme_iov_md": false 00:18:02.299 }, 00:18:02.299 "memory_domains": [ 00:18:02.299 { 00:18:02.299 "dma_device_id": "system", 00:18:02.299 "dma_device_type": 1 00:18:02.299 }, 00:18:02.299 { 00:18:02.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.299 "dma_device_type": 2 00:18:02.299 }, 00:18:02.299 { 00:18:02.299 "dma_device_id": "system", 00:18:02.299 "dma_device_type": 1 00:18:02.299 }, 00:18:02.299 { 00:18:02.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.299 "dma_device_type": 2 00:18:02.299 }, 00:18:02.299 { 00:18:02.299 "dma_device_id": "system", 00:18:02.299 "dma_device_type": 1 00:18:02.299 }, 00:18:02.299 { 00:18:02.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.299 "dma_device_type": 2 00:18:02.299 } 00:18:02.299 ], 00:18:02.299 "driver_specific": { 00:18:02.299 "raid": { 00:18:02.299 "uuid": "a88685c9-4c5c-4771-b5dc-cf5e73165cc5", 00:18:02.299 "strip_size_kb": 64, 00:18:02.299 "state": "online", 00:18:02.299 "raid_level": "concat", 00:18:02.299 "superblock": true, 00:18:02.299 "num_base_bdevs": 3, 00:18:02.299 "num_base_bdevs_discovered": 3, 00:18:02.299 "num_base_bdevs_operational": 3, 00:18:02.299 "base_bdevs_list": [ 00:18:02.299 { 00:18:02.299 "name": "pt1", 00:18:02.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:02.299 "is_configured": true, 00:18:02.299 "data_offset": 2048, 00:18:02.299 "data_size": 63488 00:18:02.299 }, 00:18:02.299 { 00:18:02.299 "name": "pt2", 00:18:02.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.299 "is_configured": true, 00:18:02.299 "data_offset": 2048, 00:18:02.299 "data_size": 63488 00:18:02.299 }, 00:18:02.299 { 00:18:02.299 "name": "pt3", 00:18:02.299 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.299 "is_configured": true, 00:18:02.299 "data_offset": 2048, 00:18:02.299 
"data_size": 63488 00:18:02.299 } 00:18:02.299 ] 00:18:02.299 } 00:18:02.299 } 00:18:02.299 }' 00:18:02.299 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:02.299 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:02.299 pt2 00:18:02.299 pt3' 00:18:02.300 05:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.300 05:28:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.300 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.300 [2024-11-20 05:28:34.128615] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.558 05:28:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.558 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a88685c9-4c5c-4771-b5dc-cf5e73165cc5 00:18:02.558 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a88685c9-4c5c-4771-b5dc-cf5e73165cc5 ']' 00:18:02.558 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:02.558 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.558 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.558 [2024-11-20 05:28:34.164384] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.559 [2024-11-20 05:28:34.164504] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.559 [2024-11-20 05:28:34.164634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.559 [2024-11-20 05:28:34.164746] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.559 [2024-11-20 05:28:34.164816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.559 05:28:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # rpc_cmd bdev_get_bdevs 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.559 [2024-11-20 05:28:34.268445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:02.559 [2024-11-20 05:28:34.270175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:18:02.559 [2024-11-20 05:28:34.270217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:02.559 [2024-11-20 05:28:34.270263] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:02.559 [2024-11-20 05:28:34.270319] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:02.559 [2024-11-20 05:28:34.270335] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:02.559 [2024-11-20 05:28:34.270349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.559 [2024-11-20 05:28:34.270358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:02.559 request: 00:18:02.559 { 00:18:02.559 "name": "raid_bdev1", 00:18:02.559 "raid_level": "concat", 00:18:02.559 "base_bdevs": [ 00:18:02.559 "malloc1", 00:18:02.559 "malloc2", 00:18:02.559 "malloc3" 00:18:02.559 ], 00:18:02.559 "strip_size_kb": 64, 00:18:02.559 "superblock": false, 00:18:02.559 "method": "bdev_raid_create", 00:18:02.559 "req_id": 1 00:18:02.559 } 00:18:02.559 Got JSON-RPC error response 00:18:02.559 response: 00:18:02.559 { 00:18:02.559 "code": -17, 00:18:02.559 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:02.559 } 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.559 [2024-11-20 05:28:34.312397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:02.559 [2024-11-20 05:28:34.312456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.559 [2024-11-20 05:28:34.312474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:02.559 [2024-11-20 05:28:34.312482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.559 [2024-11-20 05:28:34.314617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.559 [2024-11-20 05:28:34.314651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:02.559 [2024-11-20 05:28:34.314732] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:02.559 [2024-11-20 05:28:34.314784] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:02.559 pt1 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.559 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.560 "name": "raid_bdev1", 
00:18:02.560 "uuid": "a88685c9-4c5c-4771-b5dc-cf5e73165cc5", 00:18:02.560 "strip_size_kb": 64, 00:18:02.560 "state": "configuring", 00:18:02.560 "raid_level": "concat", 00:18:02.560 "superblock": true, 00:18:02.560 "num_base_bdevs": 3, 00:18:02.560 "num_base_bdevs_discovered": 1, 00:18:02.560 "num_base_bdevs_operational": 3, 00:18:02.560 "base_bdevs_list": [ 00:18:02.560 { 00:18:02.560 "name": "pt1", 00:18:02.560 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:02.560 "is_configured": true, 00:18:02.560 "data_offset": 2048, 00:18:02.560 "data_size": 63488 00:18:02.560 }, 00:18:02.560 { 00:18:02.560 "name": null, 00:18:02.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.560 "is_configured": false, 00:18:02.560 "data_offset": 2048, 00:18:02.560 "data_size": 63488 00:18:02.560 }, 00:18:02.560 { 00:18:02.560 "name": null, 00:18:02.560 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.560 "is_configured": false, 00:18:02.560 "data_offset": 2048, 00:18:02.560 "data_size": 63488 00:18:02.560 } 00:18:02.560 ] 00:18:02.560 }' 00:18:02.560 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.560 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.817 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:18:02.817 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:02.817 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.817 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.817 [2024-11-20 05:28:34.628464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:02.817 [2024-11-20 05:28:34.628529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.817 [2024-11-20 05:28:34.628551] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:02.817 [2024-11-20 05:28:34.628560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.817 [2024-11-20 05:28:34.628962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.817 [2024-11-20 05:28:34.628979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:02.817 [2024-11-20 05:28:34.629058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:02.817 [2024-11-20 05:28:34.629083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:02.817 pt2 00:18:02.817 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.817 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:02.817 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.817 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.817 [2024-11-20 05:28:34.636461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:02.817 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.818 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:18:02.818 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.818 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.818 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:02.818 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.818 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:18:02.818 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.818 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.818 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.818 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.818 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.818 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.818 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.818 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.075 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.075 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.075 "name": "raid_bdev1", 00:18:03.075 "uuid": "a88685c9-4c5c-4771-b5dc-cf5e73165cc5", 00:18:03.075 "strip_size_kb": 64, 00:18:03.075 "state": "configuring", 00:18:03.075 "raid_level": "concat", 00:18:03.076 "superblock": true, 00:18:03.076 "num_base_bdevs": 3, 00:18:03.076 "num_base_bdevs_discovered": 1, 00:18:03.076 "num_base_bdevs_operational": 3, 00:18:03.076 "base_bdevs_list": [ 00:18:03.076 { 00:18:03.076 "name": "pt1", 00:18:03.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:03.076 "is_configured": true, 00:18:03.076 "data_offset": 2048, 00:18:03.076 "data_size": 63488 00:18:03.076 }, 00:18:03.076 { 00:18:03.076 "name": null, 00:18:03.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.076 "is_configured": false, 00:18:03.076 "data_offset": 0, 00:18:03.076 "data_size": 63488 00:18:03.076 }, 00:18:03.076 { 00:18:03.076 "name": null, 00:18:03.076 
"uuid": "00000000-0000-0000-0000-000000000003", 00:18:03.076 "is_configured": false, 00:18:03.076 "data_offset": 2048, 00:18:03.076 "data_size": 63488 00:18:03.076 } 00:18:03.076 ] 00:18:03.076 }' 00:18:03.076 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.076 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.334 [2024-11-20 05:28:34.968541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:03.334 [2024-11-20 05:28:34.968621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.334 [2024-11-20 05:28:34.968640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:03.334 [2024-11-20 05:28:34.968651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.334 [2024-11-20 05:28:34.969099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.334 [2024-11-20 05:28:34.969121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:03.334 [2024-11-20 05:28:34.969198] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:03.334 [2024-11-20 05:28:34.969220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:03.334 pt2 00:18:03.334 05:28:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.334 [2024-11-20 05:28:34.976520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:03.334 [2024-11-20 05:28:34.976569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.334 [2024-11-20 05:28:34.976583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:03.334 [2024-11-20 05:28:34.976593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.334 [2024-11-20 05:28:34.976961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.334 [2024-11-20 05:28:34.976982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:03.334 [2024-11-20 05:28:34.977044] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:03.334 [2024-11-20 05:28:34.977064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:03.334 [2024-11-20 05:28:34.977169] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:03.334 [2024-11-20 05:28:34.977183] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:03.334 [2024-11-20 05:28:34.977408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:18:03.334 [2024-11-20 05:28:34.977523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:03.334 [2024-11-20 05:28:34.977596] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:03.334 [2024-11-20 05:28:34.977715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.334 pt3 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.334 05:28:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.334 05:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.334 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.334 "name": "raid_bdev1", 00:18:03.335 "uuid": "a88685c9-4c5c-4771-b5dc-cf5e73165cc5", 00:18:03.335 "strip_size_kb": 64, 00:18:03.335 "state": "online", 00:18:03.335 "raid_level": "concat", 00:18:03.335 "superblock": true, 00:18:03.335 "num_base_bdevs": 3, 00:18:03.335 "num_base_bdevs_discovered": 3, 00:18:03.335 "num_base_bdevs_operational": 3, 00:18:03.335 "base_bdevs_list": [ 00:18:03.335 { 00:18:03.335 "name": "pt1", 00:18:03.335 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:03.335 "is_configured": true, 00:18:03.335 "data_offset": 2048, 00:18:03.335 "data_size": 63488 00:18:03.335 }, 00:18:03.335 { 00:18:03.335 "name": "pt2", 00:18:03.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.335 "is_configured": true, 00:18:03.335 "data_offset": 2048, 00:18:03.335 "data_size": 63488 00:18:03.335 }, 00:18:03.335 { 00:18:03.335 "name": "pt3", 00:18:03.335 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:03.335 "is_configured": true, 00:18:03.335 "data_offset": 2048, 00:18:03.335 "data_size": 63488 00:18:03.335 } 00:18:03.335 ] 00:18:03.335 }' 00:18:03.335 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.335 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.594 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:03.594 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:18:03.594 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:03.594 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:03.594 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:03.594 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:03.594 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:03.594 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:03.594 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.594 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.594 [2024-11-20 05:28:35.288892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.594 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.594 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:03.594 "name": "raid_bdev1", 00:18:03.594 "aliases": [ 00:18:03.594 "a88685c9-4c5c-4771-b5dc-cf5e73165cc5" 00:18:03.594 ], 00:18:03.594 "product_name": "Raid Volume", 00:18:03.594 "block_size": 512, 00:18:03.594 "num_blocks": 190464, 00:18:03.594 "uuid": "a88685c9-4c5c-4771-b5dc-cf5e73165cc5", 00:18:03.594 "assigned_rate_limits": { 00:18:03.594 "rw_ios_per_sec": 0, 00:18:03.594 "rw_mbytes_per_sec": 0, 00:18:03.594 "r_mbytes_per_sec": 0, 00:18:03.594 "w_mbytes_per_sec": 0 00:18:03.594 }, 00:18:03.594 "claimed": false, 00:18:03.594 "zoned": false, 00:18:03.594 "supported_io_types": { 00:18:03.594 "read": true, 00:18:03.594 "write": true, 00:18:03.594 "unmap": true, 00:18:03.594 "flush": true, 00:18:03.594 "reset": true, 00:18:03.594 "nvme_admin": false, 00:18:03.594 "nvme_io": false, 
00:18:03.594 "nvme_io_md": false, 00:18:03.594 "write_zeroes": true, 00:18:03.594 "zcopy": false, 00:18:03.594 "get_zone_info": false, 00:18:03.594 "zone_management": false, 00:18:03.594 "zone_append": false, 00:18:03.594 "compare": false, 00:18:03.594 "compare_and_write": false, 00:18:03.594 "abort": false, 00:18:03.594 "seek_hole": false, 00:18:03.594 "seek_data": false, 00:18:03.594 "copy": false, 00:18:03.594 "nvme_iov_md": false 00:18:03.594 }, 00:18:03.594 "memory_domains": [ 00:18:03.594 { 00:18:03.594 "dma_device_id": "system", 00:18:03.594 "dma_device_type": 1 00:18:03.594 }, 00:18:03.594 { 00:18:03.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.594 "dma_device_type": 2 00:18:03.594 }, 00:18:03.594 { 00:18:03.594 "dma_device_id": "system", 00:18:03.594 "dma_device_type": 1 00:18:03.594 }, 00:18:03.594 { 00:18:03.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.594 "dma_device_type": 2 00:18:03.594 }, 00:18:03.594 { 00:18:03.594 "dma_device_id": "system", 00:18:03.594 "dma_device_type": 1 00:18:03.594 }, 00:18:03.594 { 00:18:03.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.594 "dma_device_type": 2 00:18:03.594 } 00:18:03.594 ], 00:18:03.594 "driver_specific": { 00:18:03.594 "raid": { 00:18:03.594 "uuid": "a88685c9-4c5c-4771-b5dc-cf5e73165cc5", 00:18:03.594 "strip_size_kb": 64, 00:18:03.594 "state": "online", 00:18:03.594 "raid_level": "concat", 00:18:03.594 "superblock": true, 00:18:03.594 "num_base_bdevs": 3, 00:18:03.594 "num_base_bdevs_discovered": 3, 00:18:03.594 "num_base_bdevs_operational": 3, 00:18:03.594 "base_bdevs_list": [ 00:18:03.594 { 00:18:03.594 "name": "pt1", 00:18:03.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:03.594 "is_configured": true, 00:18:03.594 "data_offset": 2048, 00:18:03.594 "data_size": 63488 00:18:03.594 }, 00:18:03.594 { 00:18:03.594 "name": "pt2", 00:18:03.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.594 "is_configured": true, 00:18:03.594 "data_offset": 2048, 00:18:03.594 
"data_size": 63488 00:18:03.594 }, 00:18:03.594 { 00:18:03.594 "name": "pt3", 00:18:03.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:03.594 "is_configured": true, 00:18:03.594 "data_offset": 2048, 00:18:03.594 "data_size": 63488 00:18:03.594 } 00:18:03.594 ] 00:18:03.594 } 00:18:03.594 } 00:18:03.594 }' 00:18:03.594 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:03.594 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:03.594 pt2 00:18:03.594 pt3' 00:18:03.594 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.594 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:03.595 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.595 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:03.595 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.595 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.595 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.595 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.595 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:03.595 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:03.595 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.595 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.595 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:03.595 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.595 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:03.853 [2024-11-20 05:28:35.508889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a88685c9-4c5c-4771-b5dc-cf5e73165cc5 '!=' a88685c9-4c5c-4771-b5dc-cf5e73165cc5 ']' 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65342 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 65342 ']' 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 65342 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65342 00:18:03.853 killing process with pid 65342 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65342' 00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 65342 00:18:03.853 [2024-11-20 05:28:35.560423] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:18:03.853 05:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 65342 00:18:03.853 [2024-11-20 05:28:35.560539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.853 [2024-11-20 05:28:35.560610] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.853 [2024-11-20 05:28:35.560621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:04.111 [2024-11-20 05:28:35.716540] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:04.677 05:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:04.677 00:18:04.677 real 0m3.813s 00:18:04.677 user 0m5.477s 00:18:04.677 sys 0m0.717s 00:18:04.677 05:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:04.677 ************************************ 00:18:04.677 END TEST raid_superblock_test 00:18:04.677 ************************************ 00:18:04.677 05:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.677 05:28:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:18:04.677 05:28:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:04.677 05:28:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:04.677 05:28:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:04.677 ************************************ 00:18:04.677 START TEST raid_read_error_test 00:18:04.677 ************************************ 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:04.677 05:28:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:04.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Vvypp1ZzMK 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65579 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65579 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65579 ']' 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.677 05:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:04.677 [2024-11-20 05:28:36.436497] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:18:04.677 [2024-11-20 05:28:36.436620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65579 ] 00:18:04.935 [2024-11-20 05:28:36.593859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.935 [2024-11-20 05:28:36.698923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.194 [2024-11-20 05:28:36.823570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.194 [2024-11-20 05:28:36.823618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.486 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:05.486 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:18:05.486 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:05.486 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:05.486 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.486 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.486 BaseBdev1_malloc 00:18:05.486 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.486 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:05.486 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.486 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.744 true 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.744 [2024-11-20 05:28:37.328290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:05.744 [2024-11-20 05:28:37.328488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.744 [2024-11-20 05:28:37.328513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:05.744 [2024-11-20 05:28:37.328523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.744 [2024-11-20 05:28:37.330426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.744 [2024-11-20 05:28:37.330458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:05.744 BaseBdev1 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.744 BaseBdev2_malloc 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.744 true 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.744 [2024-11-20 05:28:37.370220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:05.744 [2024-11-20 05:28:37.370263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.744 [2024-11-20 05:28:37.370276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:05.744 [2024-11-20 05:28:37.370284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.744 [2024-11-20 05:28:37.372163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.744 [2024-11-20 05:28:37.372306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:05.744 BaseBdev2 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.744 BaseBdev3_malloc 00:18:05.744 05:28:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.744 true 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.744 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.745 [2024-11-20 05:28:37.428433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:05.745 [2024-11-20 05:28:37.428489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.745 [2024-11-20 05:28:37.428506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:05.745 [2024-11-20 05:28:37.428516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.745 [2024-11-20 05:28:37.430432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.745 [2024-11-20 05:28:37.430464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:05.745 BaseBdev3 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.745 [2024-11-20 05:28:37.436494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:05.745 [2024-11-20 05:28:37.438136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:05.745 [2024-11-20 05:28:37.438328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:05.745 [2024-11-20 05:28:37.438522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:05.745 [2024-11-20 05:28:37.438532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:05.745 [2024-11-20 05:28:37.438758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:18:05.745 [2024-11-20 05:28:37.438885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:05.745 [2024-11-20 05:28:37.438896] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:05.745 [2024-11-20 05:28:37.439015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.745 05:28:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.745 "name": "raid_bdev1", 00:18:05.745 "uuid": "45d95c59-174a-446a-a6be-0d663970c485", 00:18:05.745 "strip_size_kb": 64, 00:18:05.745 "state": "online", 00:18:05.745 "raid_level": "concat", 00:18:05.745 "superblock": true, 00:18:05.745 "num_base_bdevs": 3, 00:18:05.745 "num_base_bdevs_discovered": 3, 00:18:05.745 "num_base_bdevs_operational": 3, 00:18:05.745 "base_bdevs_list": [ 00:18:05.745 { 00:18:05.745 "name": "BaseBdev1", 00:18:05.745 "uuid": "74335720-e776-5a07-81d3-6c87b3523ea6", 00:18:05.745 "is_configured": true, 00:18:05.745 "data_offset": 2048, 00:18:05.745 "data_size": 63488 00:18:05.745 }, 00:18:05.745 { 00:18:05.745 "name": "BaseBdev2", 00:18:05.745 "uuid": "7e4674b1-e74d-5c77-96bf-d2fab2a9fa13", 00:18:05.745 "is_configured": true, 00:18:05.745 "data_offset": 2048, 00:18:05.745 "data_size": 63488 
00:18:05.745 }, 00:18:05.745 { 00:18:05.745 "name": "BaseBdev3", 00:18:05.745 "uuid": "34fc950e-ea03-5f33-8912-fb54075f350e", 00:18:05.745 "is_configured": true, 00:18:05.745 "data_offset": 2048, 00:18:05.745 "data_size": 63488 00:18:05.745 } 00:18:05.745 ] 00:18:05.745 }' 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.745 05:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.002 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:06.002 05:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:06.002 [2024-11-20 05:28:37.825455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:18:06.936 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:06.936 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.936 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.936 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.936 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:06.936 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:18:06.936 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:18:06.937 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:06.937 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.937 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:18:06.937 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:06.937 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.937 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.937 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.937 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.937 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.937 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.937 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.937 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.937 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.937 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.937 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.195 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.195 "name": "raid_bdev1", 00:18:07.195 "uuid": "45d95c59-174a-446a-a6be-0d663970c485", 00:18:07.195 "strip_size_kb": 64, 00:18:07.195 "state": "online", 00:18:07.195 "raid_level": "concat", 00:18:07.195 "superblock": true, 00:18:07.195 "num_base_bdevs": 3, 00:18:07.195 "num_base_bdevs_discovered": 3, 00:18:07.195 "num_base_bdevs_operational": 3, 00:18:07.195 "base_bdevs_list": [ 00:18:07.195 { 00:18:07.195 "name": "BaseBdev1", 00:18:07.196 "uuid": "74335720-e776-5a07-81d3-6c87b3523ea6", 00:18:07.196 "is_configured": true, 00:18:07.196 "data_offset": 2048, 00:18:07.196 "data_size": 63488 
00:18:07.196 }, 00:18:07.196 { 00:18:07.196 "name": "BaseBdev2", 00:18:07.196 "uuid": "7e4674b1-e74d-5c77-96bf-d2fab2a9fa13", 00:18:07.196 "is_configured": true, 00:18:07.196 "data_offset": 2048, 00:18:07.196 "data_size": 63488 00:18:07.196 }, 00:18:07.196 { 00:18:07.196 "name": "BaseBdev3", 00:18:07.196 "uuid": "34fc950e-ea03-5f33-8912-fb54075f350e", 00:18:07.196 "is_configured": true, 00:18:07.196 "data_offset": 2048, 00:18:07.196 "data_size": 63488 00:18:07.196 } 00:18:07.196 ] 00:18:07.196 }' 00:18:07.196 05:28:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.196 05:28:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.461 05:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:07.461 05:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.461 05:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.461 [2024-11-20 05:28:39.066378] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.461 [2024-11-20 05:28:39.066524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.462 [2024-11-20 05:28:39.069031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.462 [2024-11-20 05:28:39.069176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.462 [2024-11-20 05:28:39.069270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.462 [2024-11-20 05:28:39.069324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:07.462 05:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.462 { 00:18:07.462 "results": [ 00:18:07.462 { 00:18:07.462 "job": "raid_bdev1", 
00:18:07.462 "core_mask": "0x1", 00:18:07.462 "workload": "randrw", 00:18:07.462 "percentage": 50, 00:18:07.462 "status": "finished", 00:18:07.462 "queue_depth": 1, 00:18:07.462 "io_size": 131072, 00:18:07.462 "runtime": 1.239411, 00:18:07.462 "iops": 16743.437003544426, 00:18:07.462 "mibps": 2092.9296254430533, 00:18:07.462 "io_failed": 1, 00:18:07.462 "io_timeout": 0, 00:18:07.462 "avg_latency_us": 82.45413474974887, 00:18:07.462 "min_latency_us": 25.403076923076924, 00:18:07.462 "max_latency_us": 1329.6246153846155 00:18:07.462 } 00:18:07.462 ], 00:18:07.462 "core_count": 1 00:18:07.462 } 00:18:07.462 05:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65579 00:18:07.462 05:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65579 ']' 00:18:07.462 05:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65579 00:18:07.462 05:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:18:07.462 05:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:07.462 05:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65579 00:18:07.462 killing process with pid 65579 00:18:07.462 05:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:07.462 05:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:07.462 05:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65579' 00:18:07.462 05:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65579 00:18:07.462 [2024-11-20 05:28:39.100596] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.462 05:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65579 00:18:07.462 [2024-11-20 
05:28:39.220679] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:08.032 05:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Vvypp1ZzMK 00:18:08.291 05:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:08.291 05:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:08.291 05:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:18:08.291 05:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:18:08.291 05:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:08.291 05:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:08.291 05:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:18:08.291 00:18:08.291 real 0m3.508s 00:18:08.291 user 0m4.132s 00:18:08.291 sys 0m0.449s 00:18:08.291 ************************************ 00:18:08.291 END TEST raid_read_error_test 00:18:08.291 ************************************ 00:18:08.291 05:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:08.291 05:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.291 05:28:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:18:08.291 05:28:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:08.291 05:28:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:08.291 05:28:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:08.291 ************************************ 00:18:08.291 START TEST raid_write_error_test 00:18:08.291 ************************************ 00:18:08.291 05:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:18:08.291 05:28:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:18:08.291 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:18:08.291 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:18:08.291 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:08.291 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:08.291 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:08.291 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:08.291 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:08.291 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:08.291 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:08.292 05:28:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kO5yGLwP4h 00:18:08.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65719 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65719 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65719 ']' 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.292 05:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:08.292 [2024-11-20 05:28:40.000099] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:18:08.292 [2024-11-20 05:28:40.000263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65719 ] 00:18:08.550 [2024-11-20 05:28:40.163485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.550 [2024-11-20 05:28:40.267378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.808 [2024-11-20 05:28:40.389041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.808 [2024-11-20 05:28:40.389107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.066 BaseBdev1_malloc 00:18:09.066 05:28:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.066 true 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.066 [2024-11-20 05:28:40.886601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:09.066 [2024-11-20 05:28:40.886783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.066 [2024-11-20 05:28:40.886811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:09.066 [2024-11-20 05:28:40.886822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.066 [2024-11-20 05:28:40.888793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.066 [2024-11-20 05:28:40.888829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:09.066 BaseBdev1 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.066 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.324 BaseBdev2_malloc 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.324 true 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.324 [2024-11-20 05:28:40.928841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:09.324 [2024-11-20 05:28:40.928885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.324 [2024-11-20 05:28:40.928898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:09.324 [2024-11-20 05:28:40.928907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.324 [2024-11-20 05:28:40.930789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.324 [2024-11-20 05:28:40.930822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:09.324 BaseBdev2 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.324 BaseBdev3_malloc 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.324 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.324 true 00:18:09.325 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.325 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:09.325 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.325 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.325 [2024-11-20 05:28:40.989997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:09.325 [2024-11-20 05:28:40.990045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.325 [2024-11-20 05:28:40.990061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:09.325 [2024-11-20 05:28:40.990070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.325 [2024-11-20 05:28:40.991953] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.325 [2024-11-20 05:28:40.992098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:09.325 BaseBdev3 00:18:09.325 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.325 05:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:18:09.325 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.325 05:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.325 [2024-11-20 05:28:40.998062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.325 [2024-11-20 05:28:40.999658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:09.325 [2024-11-20 05:28:40.999725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:09.325 [2024-11-20 05:28:40.999900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:09.325 [2024-11-20 05:28:40.999908] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:09.325 [2024-11-20 05:28:41.000117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:18:09.325 [2024-11-20 05:28:41.000237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:09.325 [2024-11-20 05:28:41.000247] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:09.325 [2024-11-20 05:28:41.000355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.325 
05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.325 "name": "raid_bdev1", 00:18:09.325 "uuid": "1306da75-d8df-46a4-b718-5d13c293dd0f", 00:18:09.325 "strip_size_kb": 64, 00:18:09.325 "state": "online", 00:18:09.325 "raid_level": "concat", 00:18:09.325 "superblock": true, 
00:18:09.325 "num_base_bdevs": 3, 00:18:09.325 "num_base_bdevs_discovered": 3, 00:18:09.325 "num_base_bdevs_operational": 3, 00:18:09.325 "base_bdevs_list": [ 00:18:09.325 { 00:18:09.325 "name": "BaseBdev1", 00:18:09.325 "uuid": "04c7dcdf-f270-5a69-8b24-ea1a22478c35", 00:18:09.325 "is_configured": true, 00:18:09.325 "data_offset": 2048, 00:18:09.325 "data_size": 63488 00:18:09.325 }, 00:18:09.325 { 00:18:09.325 "name": "BaseBdev2", 00:18:09.325 "uuid": "1efaf1a1-bfa9-50c1-8968-435fccb683b7", 00:18:09.325 "is_configured": true, 00:18:09.325 "data_offset": 2048, 00:18:09.325 "data_size": 63488 00:18:09.325 }, 00:18:09.325 { 00:18:09.325 "name": "BaseBdev3", 00:18:09.325 "uuid": "082a14cc-73bd-5676-b9e0-a0caab22bbb2", 00:18:09.325 "is_configured": true, 00:18:09.325 "data_offset": 2048, 00:18:09.325 "data_size": 63488 00:18:09.325 } 00:18:09.325 ] 00:18:09.325 }' 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.325 05:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.584 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:09.584 05:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:09.584 [2024-11-20 05:28:41.387053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local 
expected_num_base_bdevs 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.518 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.519 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.519 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.519 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.519 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:10.519 "name": "raid_bdev1", 00:18:10.519 "uuid": "1306da75-d8df-46a4-b718-5d13c293dd0f", 00:18:10.519 "strip_size_kb": 64, 00:18:10.519 "state": "online", 00:18:10.519 "raid_level": "concat", 00:18:10.519 "superblock": true, 00:18:10.519 "num_base_bdevs": 3, 00:18:10.519 "num_base_bdevs_discovered": 3, 00:18:10.519 "num_base_bdevs_operational": 3, 00:18:10.519 "base_bdevs_list": [ 00:18:10.519 { 00:18:10.519 "name": "BaseBdev1", 00:18:10.519 "uuid": "04c7dcdf-f270-5a69-8b24-ea1a22478c35", 00:18:10.519 "is_configured": true, 00:18:10.519 "data_offset": 2048, 00:18:10.519 "data_size": 63488 00:18:10.519 }, 00:18:10.519 { 00:18:10.519 "name": "BaseBdev2", 00:18:10.519 "uuid": "1efaf1a1-bfa9-50c1-8968-435fccb683b7", 00:18:10.519 "is_configured": true, 00:18:10.519 "data_offset": 2048, 00:18:10.519 "data_size": 63488 00:18:10.519 }, 00:18:10.519 { 00:18:10.519 "name": "BaseBdev3", 00:18:10.519 "uuid": "082a14cc-73bd-5676-b9e0-a0caab22bbb2", 00:18:10.519 "is_configured": true, 00:18:10.519 "data_offset": 2048, 00:18:10.519 "data_size": 63488 00:18:10.519 } 00:18:10.519 ] 00:18:10.519 }' 00:18:10.519 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.519 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.085 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:11.085 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.085 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.085 [2024-11-20 05:28:42.624506] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:11.085 [2024-11-20 05:28:42.624538] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.085 [2024-11-20 05:28:42.626957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:18:11.085 [2024-11-20 05:28:42.627008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.085 [2024-11-20 05:28:42.627042] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.085 [2024-11-20 05:28:42.627050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:11.085 { 00:18:11.085 "results": [ 00:18:11.085 { 00:18:11.085 "job": "raid_bdev1", 00:18:11.085 "core_mask": "0x1", 00:18:11.085 "workload": "randrw", 00:18:11.085 "percentage": 50, 00:18:11.085 "status": "finished", 00:18:11.085 "queue_depth": 1, 00:18:11.085 "io_size": 131072, 00:18:11.085 "runtime": 1.235513, 00:18:11.085 "iops": 17039.07607609147, 00:18:11.085 "mibps": 2129.884509511434, 00:18:11.085 "io_failed": 1, 00:18:11.085 "io_timeout": 0, 00:18:11.085 "avg_latency_us": 81.13695179565127, 00:18:11.085 "min_latency_us": 25.6, 00:18:11.085 "max_latency_us": 1348.5292307692307 00:18:11.085 } 00:18:11.085 ], 00:18:11.085 "core_count": 1 00:18:11.085 } 00:18:11.085 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.085 05:28:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65719 00:18:11.085 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65719 ']' 00:18:11.085 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65719 00:18:11.085 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:18:11.085 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:11.085 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65719 00:18:11.085 killing process with pid 65719 00:18:11.085 05:28:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:11.085 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:11.085 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65719' 00:18:11.085 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65719 00:18:11.085 [2024-11-20 05:28:42.658304] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:11.085 05:28:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65719 00:18:11.085 [2024-11-20 05:28:42.778891] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:11.654 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kO5yGLwP4h 00:18:11.654 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:11.654 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:11.654 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:18:11.654 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:18:11.654 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:11.654 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:11.654 05:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:18:11.654 00:18:11.654 real 0m3.513s 00:18:11.654 user 0m4.141s 00:18:11.654 sys 0m0.445s 00:18:11.654 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:11.654 05:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.654 ************************************ 00:18:11.654 END TEST raid_write_error_test 00:18:11.654 ************************************ 00:18:11.654 
05:28:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:18:11.654 05:28:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:18:11.654 05:28:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:11.654 05:28:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:11.654 05:28:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:11.654 ************************************ 00:18:11.654 START TEST raid_state_function_test 00:18:11.654 ************************************ 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:11.654 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:11.917 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:11.917 Process raid pid: 65846 00:18:11.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:11.917 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65846 00:18:11.917 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65846' 00:18:11.917 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65846 00:18:11.917 05:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 65846 ']' 00:18:11.917 05:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:11.917 05:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.917 05:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:11.917 05:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.917 05:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:11.917 05:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.917 [2024-11-20 05:28:43.550343] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:18:11.917 [2024-11-20 05:28:43.550474] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.917 [2024-11-20 05:28:43.709325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.179 [2024-11-20 05:28:43.829198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.179 [2024-11-20 05:28:43.981490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.179 [2024-11-20 05:28:43.981694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.752 [2024-11-20 05:28:44.406805] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:12.752 [2024-11-20 05:28:44.406861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:12.752 [2024-11-20 05:28:44.406871] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:12.752 [2024-11-20 05:28:44.406882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:12.752 [2024-11-20 05:28:44.406893] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:18:12.752 [2024-11-20 05:28:44.406902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.752 05:28:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.752 "name": "Existed_Raid", 00:18:12.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.752 "strip_size_kb": 0, 00:18:12.752 "state": "configuring", 00:18:12.752 "raid_level": "raid1", 00:18:12.752 "superblock": false, 00:18:12.752 "num_base_bdevs": 3, 00:18:12.752 "num_base_bdevs_discovered": 0, 00:18:12.752 "num_base_bdevs_operational": 3, 00:18:12.752 "base_bdevs_list": [ 00:18:12.752 { 00:18:12.752 "name": "BaseBdev1", 00:18:12.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.752 "is_configured": false, 00:18:12.752 "data_offset": 0, 00:18:12.752 "data_size": 0 00:18:12.752 }, 00:18:12.752 { 00:18:12.752 "name": "BaseBdev2", 00:18:12.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.752 "is_configured": false, 00:18:12.752 "data_offset": 0, 00:18:12.752 "data_size": 0 00:18:12.752 }, 00:18:12.752 { 00:18:12.752 "name": "BaseBdev3", 00:18:12.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.752 "is_configured": false, 00:18:12.752 "data_offset": 0, 00:18:12.752 "data_size": 0 00:18:12.752 } 00:18:12.752 ] 00:18:12.752 }' 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.752 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.012 [2024-11-20 05:28:44.738867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:13.012 [2024-11-20 05:28:44.738910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.012 [2024-11-20 05:28:44.746833] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:13.012 [2024-11-20 05:28:44.746875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:13.012 [2024-11-20 05:28:44.746884] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:13.012 [2024-11-20 05:28:44.746894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:13.012 [2024-11-20 05:28:44.746901] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:13.012 [2024-11-20 05:28:44.746910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.012 [2024-11-20 05:28:44.781903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.012 BaseBdev1 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.012 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.013 [ 00:18:13.013 { 00:18:13.013 "name": "BaseBdev1", 00:18:13.013 "aliases": [ 00:18:13.013 "5311d781-6427-4c3f-851a-87b9c3364b9c" 00:18:13.013 ], 00:18:13.013 "product_name": "Malloc disk", 00:18:13.013 "block_size": 512, 00:18:13.013 "num_blocks": 65536, 00:18:13.013 "uuid": "5311d781-6427-4c3f-851a-87b9c3364b9c", 00:18:13.013 "assigned_rate_limits": { 00:18:13.013 "rw_ios_per_sec": 0, 00:18:13.013 "rw_mbytes_per_sec": 0, 00:18:13.013 "r_mbytes_per_sec": 0, 00:18:13.013 "w_mbytes_per_sec": 0 00:18:13.013 }, 
00:18:13.013 "claimed": true, 00:18:13.013 "claim_type": "exclusive_write", 00:18:13.013 "zoned": false, 00:18:13.013 "supported_io_types": { 00:18:13.013 "read": true, 00:18:13.013 "write": true, 00:18:13.013 "unmap": true, 00:18:13.013 "flush": true, 00:18:13.013 "reset": true, 00:18:13.013 "nvme_admin": false, 00:18:13.013 "nvme_io": false, 00:18:13.013 "nvme_io_md": false, 00:18:13.013 "write_zeroes": true, 00:18:13.013 "zcopy": true, 00:18:13.013 "get_zone_info": false, 00:18:13.013 "zone_management": false, 00:18:13.013 "zone_append": false, 00:18:13.013 "compare": false, 00:18:13.013 "compare_and_write": false, 00:18:13.013 "abort": true, 00:18:13.013 "seek_hole": false, 00:18:13.013 "seek_data": false, 00:18:13.013 "copy": true, 00:18:13.013 "nvme_iov_md": false 00:18:13.013 }, 00:18:13.013 "memory_domains": [ 00:18:13.013 { 00:18:13.013 "dma_device_id": "system", 00:18:13.013 "dma_device_type": 1 00:18:13.013 }, 00:18:13.013 { 00:18:13.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.013 "dma_device_type": 2 00:18:13.013 } 00:18:13.013 ], 00:18:13.013 "driver_specific": {} 00:18:13.013 } 00:18:13.013 ] 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.013 05:28:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.013 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.280 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.280 "name": "Existed_Raid", 00:18:13.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.280 "strip_size_kb": 0, 00:18:13.280 "state": "configuring", 00:18:13.280 "raid_level": "raid1", 00:18:13.280 "superblock": false, 00:18:13.280 "num_base_bdevs": 3, 00:18:13.280 "num_base_bdevs_discovered": 1, 00:18:13.280 "num_base_bdevs_operational": 3, 00:18:13.280 "base_bdevs_list": [ 00:18:13.280 { 00:18:13.280 "name": "BaseBdev1", 00:18:13.280 "uuid": "5311d781-6427-4c3f-851a-87b9c3364b9c", 00:18:13.280 "is_configured": true, 00:18:13.280 "data_offset": 0, 00:18:13.280 "data_size": 65536 00:18:13.280 }, 00:18:13.280 { 00:18:13.280 "name": "BaseBdev2", 00:18:13.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.280 "is_configured": false, 00:18:13.280 
"data_offset": 0, 00:18:13.280 "data_size": 0 00:18:13.280 }, 00:18:13.280 { 00:18:13.280 "name": "BaseBdev3", 00:18:13.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.280 "is_configured": false, 00:18:13.280 "data_offset": 0, 00:18:13.280 "data_size": 0 00:18:13.280 } 00:18:13.280 ] 00:18:13.280 }' 00:18:13.280 05:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.280 05:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.280 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:13.280 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.280 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.542 [2024-11-20 05:28:45.110036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:13.542 [2024-11-20 05:28:45.110099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:13.542 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.542 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:13.542 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.542 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.542 [2024-11-20 05:28:45.118083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.542 [2024-11-20 05:28:45.120120] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:13.542 [2024-11-20 05:28:45.120162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:18:13.542 [2024-11-20 05:28:45.120172] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:13.542 [2024-11-20 05:28:45.120182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:13.542 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.542 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:13.542 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:13.542 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:13.542 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.542 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.542 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.542 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.542 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:13.543 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.543 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.543 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.543 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.543 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.543 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.543 
05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.543 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.543 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.543 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.543 "name": "Existed_Raid", 00:18:13.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.543 "strip_size_kb": 0, 00:18:13.543 "state": "configuring", 00:18:13.543 "raid_level": "raid1", 00:18:13.543 "superblock": false, 00:18:13.543 "num_base_bdevs": 3, 00:18:13.543 "num_base_bdevs_discovered": 1, 00:18:13.543 "num_base_bdevs_operational": 3, 00:18:13.543 "base_bdevs_list": [ 00:18:13.543 { 00:18:13.543 "name": "BaseBdev1", 00:18:13.543 "uuid": "5311d781-6427-4c3f-851a-87b9c3364b9c", 00:18:13.543 "is_configured": true, 00:18:13.543 "data_offset": 0, 00:18:13.543 "data_size": 65536 00:18:13.543 }, 00:18:13.543 { 00:18:13.543 "name": "BaseBdev2", 00:18:13.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.543 "is_configured": false, 00:18:13.543 "data_offset": 0, 00:18:13.543 "data_size": 0 00:18:13.543 }, 00:18:13.543 { 00:18:13.543 "name": "BaseBdev3", 00:18:13.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.543 "is_configured": false, 00:18:13.543 "data_offset": 0, 00:18:13.543 "data_size": 0 00:18:13.543 } 00:18:13.543 ] 00:18:13.543 }' 00:18:13.543 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.543 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.801 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:13.801 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.802 05:28:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.802 [2024-11-20 05:28:45.442758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:13.802 BaseBdev2 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.802 [ 00:18:13.802 { 00:18:13.802 "name": "BaseBdev2", 00:18:13.802 "aliases": [ 00:18:13.802 "f9fcb999-a3da-4721-b950-cfbd5e246490" 00:18:13.802 ], 00:18:13.802 "product_name": "Malloc disk", 
00:18:13.802 "block_size": 512, 00:18:13.802 "num_blocks": 65536, 00:18:13.802 "uuid": "f9fcb999-a3da-4721-b950-cfbd5e246490", 00:18:13.802 "assigned_rate_limits": { 00:18:13.802 "rw_ios_per_sec": 0, 00:18:13.802 "rw_mbytes_per_sec": 0, 00:18:13.802 "r_mbytes_per_sec": 0, 00:18:13.802 "w_mbytes_per_sec": 0 00:18:13.802 }, 00:18:13.802 "claimed": true, 00:18:13.802 "claim_type": "exclusive_write", 00:18:13.802 "zoned": false, 00:18:13.802 "supported_io_types": { 00:18:13.802 "read": true, 00:18:13.802 "write": true, 00:18:13.802 "unmap": true, 00:18:13.802 "flush": true, 00:18:13.802 "reset": true, 00:18:13.802 "nvme_admin": false, 00:18:13.802 "nvme_io": false, 00:18:13.802 "nvme_io_md": false, 00:18:13.802 "write_zeroes": true, 00:18:13.802 "zcopy": true, 00:18:13.802 "get_zone_info": false, 00:18:13.802 "zone_management": false, 00:18:13.802 "zone_append": false, 00:18:13.802 "compare": false, 00:18:13.802 "compare_and_write": false, 00:18:13.802 "abort": true, 00:18:13.802 "seek_hole": false, 00:18:13.802 "seek_data": false, 00:18:13.802 "copy": true, 00:18:13.802 "nvme_iov_md": false 00:18:13.802 }, 00:18:13.802 "memory_domains": [ 00:18:13.802 { 00:18:13.802 "dma_device_id": "system", 00:18:13.802 "dma_device_type": 1 00:18:13.802 }, 00:18:13.802 { 00:18:13.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.802 "dma_device_type": 2 00:18:13.802 } 00:18:13.802 ], 00:18:13.802 "driver_specific": {} 00:18:13.802 } 00:18:13.802 ] 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.802 "name": "Existed_Raid", 00:18:13.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.802 "strip_size_kb": 0, 00:18:13.802 "state": "configuring", 00:18:13.802 "raid_level": "raid1", 00:18:13.802 "superblock": false, 00:18:13.802 "num_base_bdevs": 3, 
00:18:13.802 "num_base_bdevs_discovered": 2, 00:18:13.802 "num_base_bdevs_operational": 3, 00:18:13.802 "base_bdevs_list": [ 00:18:13.802 { 00:18:13.802 "name": "BaseBdev1", 00:18:13.802 "uuid": "5311d781-6427-4c3f-851a-87b9c3364b9c", 00:18:13.802 "is_configured": true, 00:18:13.802 "data_offset": 0, 00:18:13.802 "data_size": 65536 00:18:13.802 }, 00:18:13.802 { 00:18:13.802 "name": "BaseBdev2", 00:18:13.802 "uuid": "f9fcb999-a3da-4721-b950-cfbd5e246490", 00:18:13.802 "is_configured": true, 00:18:13.802 "data_offset": 0, 00:18:13.802 "data_size": 65536 00:18:13.802 }, 00:18:13.802 { 00:18:13.802 "name": "BaseBdev3", 00:18:13.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.802 "is_configured": false, 00:18:13.802 "data_offset": 0, 00:18:13.802 "data_size": 0 00:18:13.802 } 00:18:13.802 ] 00:18:13.802 }' 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.802 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.060 [2024-11-20 05:28:45.834999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:14.060 [2024-11-20 05:28:45.835223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:14.060 [2024-11-20 05:28:45.835245] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:14.060 [2024-11-20 05:28:45.835572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:14.060 [2024-11-20 05:28:45.835739] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007e80 00:18:14.060 [2024-11-20 05:28:45.835748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:14.060 [2024-11-20 05:28:45.836027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.060 BaseBdev3 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.060 [ 00:18:14.060 { 00:18:14.060 "name": "BaseBdev3", 00:18:14.060 "aliases": [ 00:18:14.060 
"61acea31-553d-470b-8aeb-ffbbab39e57f" 00:18:14.060 ], 00:18:14.060 "product_name": "Malloc disk", 00:18:14.060 "block_size": 512, 00:18:14.060 "num_blocks": 65536, 00:18:14.060 "uuid": "61acea31-553d-470b-8aeb-ffbbab39e57f", 00:18:14.060 "assigned_rate_limits": { 00:18:14.060 "rw_ios_per_sec": 0, 00:18:14.060 "rw_mbytes_per_sec": 0, 00:18:14.060 "r_mbytes_per_sec": 0, 00:18:14.060 "w_mbytes_per_sec": 0 00:18:14.060 }, 00:18:14.060 "claimed": true, 00:18:14.060 "claim_type": "exclusive_write", 00:18:14.060 "zoned": false, 00:18:14.060 "supported_io_types": { 00:18:14.060 "read": true, 00:18:14.060 "write": true, 00:18:14.060 "unmap": true, 00:18:14.060 "flush": true, 00:18:14.060 "reset": true, 00:18:14.060 "nvme_admin": false, 00:18:14.060 "nvme_io": false, 00:18:14.060 "nvme_io_md": false, 00:18:14.060 "write_zeroes": true, 00:18:14.060 "zcopy": true, 00:18:14.060 "get_zone_info": false, 00:18:14.060 "zone_management": false, 00:18:14.060 "zone_append": false, 00:18:14.060 "compare": false, 00:18:14.060 "compare_and_write": false, 00:18:14.060 "abort": true, 00:18:14.060 "seek_hole": false, 00:18:14.060 "seek_data": false, 00:18:14.060 "copy": true, 00:18:14.060 "nvme_iov_md": false 00:18:14.060 }, 00:18:14.060 "memory_domains": [ 00:18:14.060 { 00:18:14.060 "dma_device_id": "system", 00:18:14.060 "dma_device_type": 1 00:18:14.060 }, 00:18:14.060 { 00:18:14.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.060 "dma_device_type": 2 00:18:14.060 } 00:18:14.060 ], 00:18:14.060 "driver_specific": {} 00:18:14.060 } 00:18:14.060 ] 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:14.060 
05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.060 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.318 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.318 "name": "Existed_Raid", 00:18:14.318 "uuid": "978439e0-276f-4998-a03c-f7be96733693", 00:18:14.318 "strip_size_kb": 0, 00:18:14.318 "state": "online", 00:18:14.318 "raid_level": 
"raid1", 00:18:14.318 "superblock": false, 00:18:14.318 "num_base_bdevs": 3, 00:18:14.318 "num_base_bdevs_discovered": 3, 00:18:14.318 "num_base_bdevs_operational": 3, 00:18:14.318 "base_bdevs_list": [ 00:18:14.318 { 00:18:14.318 "name": "BaseBdev1", 00:18:14.318 "uuid": "5311d781-6427-4c3f-851a-87b9c3364b9c", 00:18:14.318 "is_configured": true, 00:18:14.318 "data_offset": 0, 00:18:14.318 "data_size": 65536 00:18:14.318 }, 00:18:14.318 { 00:18:14.318 "name": "BaseBdev2", 00:18:14.318 "uuid": "f9fcb999-a3da-4721-b950-cfbd5e246490", 00:18:14.318 "is_configured": true, 00:18:14.318 "data_offset": 0, 00:18:14.318 "data_size": 65536 00:18:14.318 }, 00:18:14.318 { 00:18:14.318 "name": "BaseBdev3", 00:18:14.318 "uuid": "61acea31-553d-470b-8aeb-ffbbab39e57f", 00:18:14.318 "is_configured": true, 00:18:14.318 "data_offset": 0, 00:18:14.318 "data_size": 65536 00:18:14.318 } 00:18:14.318 ] 00:18:14.318 }' 00:18:14.318 05:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.318 05:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.576 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:14.576 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:14.576 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:14.576 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:14.576 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:14.576 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:14.576 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:14.576 05:28:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.576 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.576 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:14.576 [2024-11-20 05:28:46.183515] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.576 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.576 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:14.577 "name": "Existed_Raid", 00:18:14.577 "aliases": [ 00:18:14.577 "978439e0-276f-4998-a03c-f7be96733693" 00:18:14.577 ], 00:18:14.577 "product_name": "Raid Volume", 00:18:14.577 "block_size": 512, 00:18:14.577 "num_blocks": 65536, 00:18:14.577 "uuid": "978439e0-276f-4998-a03c-f7be96733693", 00:18:14.577 "assigned_rate_limits": { 00:18:14.577 "rw_ios_per_sec": 0, 00:18:14.577 "rw_mbytes_per_sec": 0, 00:18:14.577 "r_mbytes_per_sec": 0, 00:18:14.577 "w_mbytes_per_sec": 0 00:18:14.577 }, 00:18:14.577 "claimed": false, 00:18:14.577 "zoned": false, 00:18:14.577 "supported_io_types": { 00:18:14.577 "read": true, 00:18:14.577 "write": true, 00:18:14.577 "unmap": false, 00:18:14.577 "flush": false, 00:18:14.577 "reset": true, 00:18:14.577 "nvme_admin": false, 00:18:14.577 "nvme_io": false, 00:18:14.577 "nvme_io_md": false, 00:18:14.577 "write_zeroes": true, 00:18:14.577 "zcopy": false, 00:18:14.577 "get_zone_info": false, 00:18:14.577 "zone_management": false, 00:18:14.577 "zone_append": false, 00:18:14.577 "compare": false, 00:18:14.577 "compare_and_write": false, 00:18:14.577 "abort": false, 00:18:14.577 "seek_hole": false, 00:18:14.577 "seek_data": false, 00:18:14.577 "copy": false, 00:18:14.577 "nvme_iov_md": false 00:18:14.577 }, 00:18:14.577 "memory_domains": [ 00:18:14.577 { 00:18:14.577 "dma_device_id": "system", 00:18:14.577 "dma_device_type": 1 00:18:14.577 }, 00:18:14.577 { 
00:18:14.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.577 "dma_device_type": 2 00:18:14.577 }, 00:18:14.577 { 00:18:14.577 "dma_device_id": "system", 00:18:14.577 "dma_device_type": 1 00:18:14.577 }, 00:18:14.577 { 00:18:14.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.577 "dma_device_type": 2 00:18:14.577 }, 00:18:14.577 { 00:18:14.577 "dma_device_id": "system", 00:18:14.577 "dma_device_type": 1 00:18:14.577 }, 00:18:14.577 { 00:18:14.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.577 "dma_device_type": 2 00:18:14.577 } 00:18:14.577 ], 00:18:14.577 "driver_specific": { 00:18:14.577 "raid": { 00:18:14.577 "uuid": "978439e0-276f-4998-a03c-f7be96733693", 00:18:14.577 "strip_size_kb": 0, 00:18:14.577 "state": "online", 00:18:14.577 "raid_level": "raid1", 00:18:14.577 "superblock": false, 00:18:14.577 "num_base_bdevs": 3, 00:18:14.577 "num_base_bdevs_discovered": 3, 00:18:14.577 "num_base_bdevs_operational": 3, 00:18:14.577 "base_bdevs_list": [ 00:18:14.577 { 00:18:14.577 "name": "BaseBdev1", 00:18:14.577 "uuid": "5311d781-6427-4c3f-851a-87b9c3364b9c", 00:18:14.577 "is_configured": true, 00:18:14.577 "data_offset": 0, 00:18:14.577 "data_size": 65536 00:18:14.577 }, 00:18:14.577 { 00:18:14.577 "name": "BaseBdev2", 00:18:14.577 "uuid": "f9fcb999-a3da-4721-b950-cfbd5e246490", 00:18:14.577 "is_configured": true, 00:18:14.577 "data_offset": 0, 00:18:14.577 "data_size": 65536 00:18:14.577 }, 00:18:14.577 { 00:18:14.577 "name": "BaseBdev3", 00:18:14.577 "uuid": "61acea31-553d-470b-8aeb-ffbbab39e57f", 00:18:14.577 "is_configured": true, 00:18:14.577 "data_offset": 0, 00:18:14.577 "data_size": 65536 00:18:14.577 } 00:18:14.577 ] 00:18:14.577 } 00:18:14.577 } 00:18:14.577 }' 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:18:14.577 BaseBdev2 00:18:14.577 BaseBdev3' 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.577 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.577 [2024-11-20 05:28:46.387242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.835 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.835 05:28:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.835 "name": "Existed_Raid", 00:18:14.836 "uuid": "978439e0-276f-4998-a03c-f7be96733693", 00:18:14.836 "strip_size_kb": 0, 00:18:14.836 "state": "online", 00:18:14.836 "raid_level": "raid1", 00:18:14.836 "superblock": false, 00:18:14.836 "num_base_bdevs": 3, 00:18:14.836 "num_base_bdevs_discovered": 2, 00:18:14.836 "num_base_bdevs_operational": 2, 00:18:14.836 "base_bdevs_list": [ 00:18:14.836 { 00:18:14.836 "name": null, 00:18:14.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.836 "is_configured": false, 00:18:14.836 "data_offset": 0, 00:18:14.836 "data_size": 65536 00:18:14.836 }, 00:18:14.836 { 00:18:14.836 "name": "BaseBdev2", 00:18:14.836 "uuid": "f9fcb999-a3da-4721-b950-cfbd5e246490", 00:18:14.836 "is_configured": true, 00:18:14.836 "data_offset": 0, 00:18:14.836 "data_size": 65536 00:18:14.836 }, 00:18:14.836 { 00:18:14.836 "name": "BaseBdev3", 00:18:14.836 "uuid": "61acea31-553d-470b-8aeb-ffbbab39e57f", 00:18:14.836 "is_configured": true, 00:18:14.836 "data_offset": 0, 00:18:14.836 "data_size": 65536 00:18:14.836 } 00:18:14.836 ] 00:18:14.836 }' 00:18:14.836 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.836 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.093 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:15.093 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:15.093 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:15.093 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.093 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.093 05:28:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:15.093 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.093 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:15.093 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:15.093 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:15.093 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.094 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.094 [2024-11-20 05:28:46.813552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:15.094 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.094 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:15.094 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:15.094 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.094 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.094 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.094 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:15.094 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.094 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:15.094 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:15.094 05:28:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:15.094 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.094 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.094 [2024-11-20 05:28:46.920162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:15.094 [2024-11-20 05:28:46.920265] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.353 [2024-11-20 05:28:46.982495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.353 [2024-11-20 05:28:46.982544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.353 [2024-11-20 05:28:46.982557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:15.353 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.353 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:15.353 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:15.353 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.353 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.353 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.353 05:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:15.353 05:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 
-- # '[' -n '' ']' 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.353 BaseBdev2 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.353 [ 00:18:15.353 { 00:18:15.353 "name": "BaseBdev2", 00:18:15.353 "aliases": [ 00:18:15.353 "0b4ade37-51e6-4fe8-b057-8da060678158" 00:18:15.353 ], 00:18:15.353 "product_name": "Malloc disk", 00:18:15.353 "block_size": 512, 00:18:15.353 "num_blocks": 65536, 00:18:15.353 "uuid": "0b4ade37-51e6-4fe8-b057-8da060678158", 00:18:15.353 "assigned_rate_limits": { 00:18:15.353 "rw_ios_per_sec": 0, 00:18:15.353 "rw_mbytes_per_sec": 0, 00:18:15.353 "r_mbytes_per_sec": 0, 00:18:15.353 "w_mbytes_per_sec": 0 00:18:15.353 }, 00:18:15.353 "claimed": false, 00:18:15.353 "zoned": false, 00:18:15.353 "supported_io_types": { 00:18:15.353 "read": true, 00:18:15.353 "write": true, 00:18:15.353 "unmap": true, 00:18:15.353 "flush": true, 00:18:15.353 "reset": true, 00:18:15.353 "nvme_admin": false, 00:18:15.353 "nvme_io": false, 00:18:15.353 "nvme_io_md": false, 00:18:15.353 "write_zeroes": true, 00:18:15.353 "zcopy": true, 00:18:15.353 "get_zone_info": false, 00:18:15.353 "zone_management": false, 00:18:15.353 "zone_append": false, 00:18:15.353 "compare": false, 00:18:15.353 "compare_and_write": false, 00:18:15.353 "abort": true, 00:18:15.353 "seek_hole": false, 00:18:15.353 "seek_data": false, 00:18:15.353 "copy": true, 00:18:15.353 "nvme_iov_md": false 00:18:15.353 }, 00:18:15.353 "memory_domains": [ 00:18:15.353 { 00:18:15.353 "dma_device_id": "system", 00:18:15.353 "dma_device_type": 1 00:18:15.353 }, 00:18:15.353 { 00:18:15.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.353 "dma_device_type": 2 00:18:15.353 } 00:18:15.353 ], 00:18:15.353 "driver_specific": {} 00:18:15.353 } 00:18:15.353 ] 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.353 BaseBdev3 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.353 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.353 [ 00:18:15.353 { 00:18:15.353 "name": "BaseBdev3", 00:18:15.353 "aliases": [ 00:18:15.353 "4a7a2169-998f-4b3a-a62c-e6224a8e5d45" 00:18:15.353 ], 00:18:15.353 "product_name": "Malloc disk", 00:18:15.353 "block_size": 512, 00:18:15.353 "num_blocks": 65536, 00:18:15.353 "uuid": "4a7a2169-998f-4b3a-a62c-e6224a8e5d45", 00:18:15.353 "assigned_rate_limits": { 00:18:15.353 "rw_ios_per_sec": 0, 00:18:15.353 "rw_mbytes_per_sec": 0, 00:18:15.353 "r_mbytes_per_sec": 0, 00:18:15.353 "w_mbytes_per_sec": 0 00:18:15.353 }, 00:18:15.353 "claimed": false, 00:18:15.353 "zoned": false, 00:18:15.353 "supported_io_types": { 00:18:15.353 "read": true, 00:18:15.353 "write": true, 00:18:15.353 "unmap": true, 00:18:15.353 "flush": true, 00:18:15.353 "reset": true, 00:18:15.353 "nvme_admin": false, 00:18:15.353 "nvme_io": false, 00:18:15.353 "nvme_io_md": false, 00:18:15.353 "write_zeroes": true, 00:18:15.353 "zcopy": true, 00:18:15.353 "get_zone_info": false, 00:18:15.353 "zone_management": false, 00:18:15.353 "zone_append": false, 00:18:15.353 "compare": false, 00:18:15.353 "compare_and_write": false, 00:18:15.353 "abort": true, 00:18:15.353 "seek_hole": false, 00:18:15.353 "seek_data": false, 00:18:15.353 "copy": true, 00:18:15.353 "nvme_iov_md": false 00:18:15.353 }, 00:18:15.353 "memory_domains": [ 00:18:15.353 { 00:18:15.353 "dma_device_id": "system", 00:18:15.353 "dma_device_type": 1 00:18:15.353 }, 00:18:15.353 { 00:18:15.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.353 "dma_device_type": 2 00:18:15.354 } 00:18:15.354 ], 00:18:15.354 "driver_specific": {} 00:18:15.354 } 00:18:15.354 ] 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.354 [2024-11-20 05:28:47.153222] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:15.354 [2024-11-20 05:28:47.153385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:15.354 [2024-11-20 05:28:47.153458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:15.354 [2024-11-20 05:28:47.155130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.354 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.658 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.658 "name": "Existed_Raid", 00:18:15.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.658 "strip_size_kb": 0, 00:18:15.658 "state": "configuring", 00:18:15.658 "raid_level": "raid1", 00:18:15.658 "superblock": false, 00:18:15.658 "num_base_bdevs": 3, 00:18:15.658 "num_base_bdevs_discovered": 2, 00:18:15.658 "num_base_bdevs_operational": 3, 00:18:15.658 "base_bdevs_list": [ 00:18:15.658 { 00:18:15.658 "name": "BaseBdev1", 00:18:15.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.658 "is_configured": false, 00:18:15.658 "data_offset": 0, 00:18:15.658 "data_size": 0 00:18:15.658 }, 00:18:15.658 { 00:18:15.658 "name": "BaseBdev2", 00:18:15.658 "uuid": "0b4ade37-51e6-4fe8-b057-8da060678158", 00:18:15.658 "is_configured": true, 00:18:15.658 "data_offset": 0, 00:18:15.658 "data_size": 
65536 00:18:15.658 }, 00:18:15.658 { 00:18:15.658 "name": "BaseBdev3", 00:18:15.658 "uuid": "4a7a2169-998f-4b3a-a62c-e6224a8e5d45", 00:18:15.658 "is_configured": true, 00:18:15.658 "data_offset": 0, 00:18:15.658 "data_size": 65536 00:18:15.658 } 00:18:15.658 ] 00:18:15.658 }' 00:18:15.658 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.658 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.918 [2024-11-20 05:28:47.481349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.918 05:28:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.918 "name": "Existed_Raid", 00:18:15.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.918 "strip_size_kb": 0, 00:18:15.918 "state": "configuring", 00:18:15.918 "raid_level": "raid1", 00:18:15.918 "superblock": false, 00:18:15.918 "num_base_bdevs": 3, 00:18:15.918 "num_base_bdevs_discovered": 1, 00:18:15.918 "num_base_bdevs_operational": 3, 00:18:15.918 "base_bdevs_list": [ 00:18:15.918 { 00:18:15.918 "name": "BaseBdev1", 00:18:15.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.918 "is_configured": false, 00:18:15.918 "data_offset": 0, 00:18:15.918 "data_size": 0 00:18:15.918 }, 00:18:15.918 { 00:18:15.918 "name": null, 00:18:15.918 "uuid": "0b4ade37-51e6-4fe8-b057-8da060678158", 00:18:15.918 "is_configured": false, 00:18:15.918 "data_offset": 0, 00:18:15.918 "data_size": 65536 00:18:15.918 }, 00:18:15.918 { 00:18:15.918 "name": "BaseBdev3", 00:18:15.918 "uuid": "4a7a2169-998f-4b3a-a62c-e6224a8e5d45", 00:18:15.918 "is_configured": true, 00:18:15.918 "data_offset": 0, 00:18:15.918 "data_size": 65536 00:18:15.918 } 00:18:15.918 ] 00:18:15.918 }' 00:18:15.918 05:28:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.918 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.176 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:16.176 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.176 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.176 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.176 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.176 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:16.176 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.177 [2024-11-20 05:28:47.837843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.177 BaseBdev1 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.177 [ 00:18:16.177 { 00:18:16.177 "name": "BaseBdev1", 00:18:16.177 "aliases": [ 00:18:16.177 "8ec5249a-9a1c-46ca-9aee-4e8a1250f7bf" 00:18:16.177 ], 00:18:16.177 "product_name": "Malloc disk", 00:18:16.177 "block_size": 512, 00:18:16.177 "num_blocks": 65536, 00:18:16.177 "uuid": "8ec5249a-9a1c-46ca-9aee-4e8a1250f7bf", 00:18:16.177 "assigned_rate_limits": { 00:18:16.177 "rw_ios_per_sec": 0, 00:18:16.177 "rw_mbytes_per_sec": 0, 00:18:16.177 "r_mbytes_per_sec": 0, 00:18:16.177 "w_mbytes_per_sec": 0 00:18:16.177 }, 00:18:16.177 "claimed": true, 00:18:16.177 "claim_type": "exclusive_write", 00:18:16.177 "zoned": false, 00:18:16.177 "supported_io_types": { 00:18:16.177 "read": true, 00:18:16.177 "write": true, 00:18:16.177 "unmap": true, 00:18:16.177 "flush": true, 00:18:16.177 "reset": true, 00:18:16.177 "nvme_admin": false, 00:18:16.177 "nvme_io": false, 00:18:16.177 "nvme_io_md": false, 00:18:16.177 "write_zeroes": true, 00:18:16.177 "zcopy": true, 00:18:16.177 "get_zone_info": false, 00:18:16.177 "zone_management": false, 
00:18:16.177 "zone_append": false, 00:18:16.177 "compare": false, 00:18:16.177 "compare_and_write": false, 00:18:16.177 "abort": true, 00:18:16.177 "seek_hole": false, 00:18:16.177 "seek_data": false, 00:18:16.177 "copy": true, 00:18:16.177 "nvme_iov_md": false 00:18:16.177 }, 00:18:16.177 "memory_domains": [ 00:18:16.177 { 00:18:16.177 "dma_device_id": "system", 00:18:16.177 "dma_device_type": 1 00:18:16.177 }, 00:18:16.177 { 00:18:16.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.177 "dma_device_type": 2 00:18:16.177 } 00:18:16.177 ], 00:18:16.177 "driver_specific": {} 00:18:16.177 } 00:18:16.177 ] 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.177 
05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.177 "name": "Existed_Raid", 00:18:16.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.177 "strip_size_kb": 0, 00:18:16.177 "state": "configuring", 00:18:16.177 "raid_level": "raid1", 00:18:16.177 "superblock": false, 00:18:16.177 "num_base_bdevs": 3, 00:18:16.177 "num_base_bdevs_discovered": 2, 00:18:16.177 "num_base_bdevs_operational": 3, 00:18:16.177 "base_bdevs_list": [ 00:18:16.177 { 00:18:16.177 "name": "BaseBdev1", 00:18:16.177 "uuid": "8ec5249a-9a1c-46ca-9aee-4e8a1250f7bf", 00:18:16.177 "is_configured": true, 00:18:16.177 "data_offset": 0, 00:18:16.177 "data_size": 65536 00:18:16.177 }, 00:18:16.177 { 00:18:16.177 "name": null, 00:18:16.177 "uuid": "0b4ade37-51e6-4fe8-b057-8da060678158", 00:18:16.177 "is_configured": false, 00:18:16.177 "data_offset": 0, 00:18:16.177 "data_size": 65536 00:18:16.177 }, 00:18:16.177 { 00:18:16.177 "name": "BaseBdev3", 00:18:16.177 "uuid": "4a7a2169-998f-4b3a-a62c-e6224a8e5d45", 00:18:16.177 "is_configured": true, 00:18:16.177 "data_offset": 0, 00:18:16.177 "data_size": 65536 00:18:16.177 } 00:18:16.177 ] 00:18:16.177 }' 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.177 05:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.436 05:28:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.436 [2024-11-20 05:28:48.218015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.436 "name": "Existed_Raid", 00:18:16.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.436 "strip_size_kb": 0, 00:18:16.436 "state": "configuring", 00:18:16.436 "raid_level": "raid1", 00:18:16.436 "superblock": false, 00:18:16.436 "num_base_bdevs": 3, 00:18:16.436 "num_base_bdevs_discovered": 1, 00:18:16.436 "num_base_bdevs_operational": 3, 00:18:16.436 "base_bdevs_list": [ 00:18:16.436 { 00:18:16.436 "name": "BaseBdev1", 00:18:16.436 "uuid": "8ec5249a-9a1c-46ca-9aee-4e8a1250f7bf", 00:18:16.436 "is_configured": true, 00:18:16.436 "data_offset": 0, 00:18:16.436 "data_size": 65536 00:18:16.436 }, 00:18:16.436 { 00:18:16.436 "name": null, 00:18:16.436 "uuid": "0b4ade37-51e6-4fe8-b057-8da060678158", 00:18:16.436 "is_configured": false, 00:18:16.436 "data_offset": 0, 00:18:16.436 "data_size": 65536 00:18:16.436 }, 00:18:16.436 { 00:18:16.436 "name": null, 00:18:16.436 "uuid": "4a7a2169-998f-4b3a-a62c-e6224a8e5d45", 
00:18:16.436 "is_configured": false, 00:18:16.436 "data_offset": 0, 00:18:16.436 "data_size": 65536 00:18:16.436 } 00:18:16.436 ] 00:18:16.436 }' 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.436 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.003 [2024-11-20 05:28:48.590063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.003 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.004 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.004 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.004 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.004 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.004 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.004 "name": "Existed_Raid", 00:18:17.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.004 "strip_size_kb": 0, 00:18:17.004 "state": "configuring", 00:18:17.004 "raid_level": "raid1", 00:18:17.004 "superblock": false, 00:18:17.004 "num_base_bdevs": 3, 00:18:17.004 "num_base_bdevs_discovered": 2, 00:18:17.004 "num_base_bdevs_operational": 3, 00:18:17.004 "base_bdevs_list": [ 00:18:17.004 { 00:18:17.004 "name": "BaseBdev1", 00:18:17.004 "uuid": "8ec5249a-9a1c-46ca-9aee-4e8a1250f7bf", 00:18:17.004 
"is_configured": true, 00:18:17.004 "data_offset": 0, 00:18:17.004 "data_size": 65536 00:18:17.004 }, 00:18:17.004 { 00:18:17.004 "name": null, 00:18:17.004 "uuid": "0b4ade37-51e6-4fe8-b057-8da060678158", 00:18:17.004 "is_configured": false, 00:18:17.004 "data_offset": 0, 00:18:17.004 "data_size": 65536 00:18:17.004 }, 00:18:17.004 { 00:18:17.004 "name": "BaseBdev3", 00:18:17.004 "uuid": "4a7a2169-998f-4b3a-a62c-e6224a8e5d45", 00:18:17.004 "is_configured": true, 00:18:17.004 "data_offset": 0, 00:18:17.004 "data_size": 65536 00:18:17.004 } 00:18:17.004 ] 00:18:17.004 }' 00:18:17.004 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.004 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.262 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.262 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.262 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.262 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:17.262 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.262 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:17.262 05:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:17.262 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.262 05:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.262 [2024-11-20 05:28:48.970169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.262 "name": "Existed_Raid", 00:18:17.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.262 "strip_size_kb": 0, 00:18:17.262 "state": 
"configuring", 00:18:17.262 "raid_level": "raid1", 00:18:17.262 "superblock": false, 00:18:17.262 "num_base_bdevs": 3, 00:18:17.262 "num_base_bdevs_discovered": 1, 00:18:17.262 "num_base_bdevs_operational": 3, 00:18:17.262 "base_bdevs_list": [ 00:18:17.262 { 00:18:17.262 "name": null, 00:18:17.262 "uuid": "8ec5249a-9a1c-46ca-9aee-4e8a1250f7bf", 00:18:17.262 "is_configured": false, 00:18:17.262 "data_offset": 0, 00:18:17.262 "data_size": 65536 00:18:17.262 }, 00:18:17.262 { 00:18:17.262 "name": null, 00:18:17.262 "uuid": "0b4ade37-51e6-4fe8-b057-8da060678158", 00:18:17.262 "is_configured": false, 00:18:17.262 "data_offset": 0, 00:18:17.262 "data_size": 65536 00:18:17.262 }, 00:18:17.262 { 00:18:17.262 "name": "BaseBdev3", 00:18:17.262 "uuid": "4a7a2169-998f-4b3a-a62c-e6224a8e5d45", 00:18:17.262 "is_configured": true, 00:18:17.262 "data_offset": 0, 00:18:17.262 "data_size": 65536 00:18:17.262 } 00:18:17.262 ] 00:18:17.262 }' 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.262 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.532 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:17.532 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.532 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.532 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.532 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:17.796 05:28:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.796 [2024-11-20 05:28:49.378853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.796 "name": "Existed_Raid", 00:18:17.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.796 "strip_size_kb": 0, 00:18:17.796 "state": "configuring", 00:18:17.796 "raid_level": "raid1", 00:18:17.796 "superblock": false, 00:18:17.796 "num_base_bdevs": 3, 00:18:17.796 "num_base_bdevs_discovered": 2, 00:18:17.796 "num_base_bdevs_operational": 3, 00:18:17.796 "base_bdevs_list": [ 00:18:17.796 { 00:18:17.796 "name": null, 00:18:17.796 "uuid": "8ec5249a-9a1c-46ca-9aee-4e8a1250f7bf", 00:18:17.796 "is_configured": false, 00:18:17.796 "data_offset": 0, 00:18:17.796 "data_size": 65536 00:18:17.796 }, 00:18:17.796 { 00:18:17.796 "name": "BaseBdev2", 00:18:17.796 "uuid": "0b4ade37-51e6-4fe8-b057-8da060678158", 00:18:17.796 "is_configured": true, 00:18:17.796 "data_offset": 0, 00:18:17.796 "data_size": 65536 00:18:17.796 }, 00:18:17.796 { 00:18:17.796 "name": "BaseBdev3", 00:18:17.796 "uuid": "4a7a2169-998f-4b3a-a62c-e6224a8e5d45", 00:18:17.796 "is_configured": true, 00:18:17.796 "data_offset": 0, 00:18:17.796 "data_size": 65536 00:18:17.796 } 00:18:17.796 ] 00:18:17.796 }' 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.796 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.054 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.054 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:18:18.054 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.054 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:18.054 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.054 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:18.054 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.054 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.054 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8ec5249a-9a1c-46ca-9aee-4e8a1250f7bf 00:18:18.054 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.054 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 [2024-11-20 05:28:49.786995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:18.054 [2024-11-20 05:28:49.787036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:18.054 [2024-11-20 05:28:49.787041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:18.054 [2024-11-20 05:28:49.787248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:18.054 [2024-11-20 05:28:49.787386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:18.054 [2024-11-20 05:28:49.787396] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000008200 00:18:18.054 [2024-11-20 05:28:49.787593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.054 NewBaseBdev 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 [ 00:18:18.055 { 00:18:18.055 "name": "NewBaseBdev", 00:18:18.055 "aliases": [ 00:18:18.055 "8ec5249a-9a1c-46ca-9aee-4e8a1250f7bf" 00:18:18.055 ], 00:18:18.055 "product_name": "Malloc disk", 00:18:18.055 "block_size": 512, 00:18:18.055 "num_blocks": 65536, 
00:18:18.055 "uuid": "8ec5249a-9a1c-46ca-9aee-4e8a1250f7bf", 00:18:18.055 "assigned_rate_limits": { 00:18:18.055 "rw_ios_per_sec": 0, 00:18:18.055 "rw_mbytes_per_sec": 0, 00:18:18.055 "r_mbytes_per_sec": 0, 00:18:18.055 "w_mbytes_per_sec": 0 00:18:18.055 }, 00:18:18.055 "claimed": true, 00:18:18.055 "claim_type": "exclusive_write", 00:18:18.055 "zoned": false, 00:18:18.055 "supported_io_types": { 00:18:18.055 "read": true, 00:18:18.055 "write": true, 00:18:18.055 "unmap": true, 00:18:18.055 "flush": true, 00:18:18.055 "reset": true, 00:18:18.055 "nvme_admin": false, 00:18:18.055 "nvme_io": false, 00:18:18.055 "nvme_io_md": false, 00:18:18.055 "write_zeroes": true, 00:18:18.055 "zcopy": true, 00:18:18.055 "get_zone_info": false, 00:18:18.055 "zone_management": false, 00:18:18.055 "zone_append": false, 00:18:18.055 "compare": false, 00:18:18.055 "compare_and_write": false, 00:18:18.055 "abort": true, 00:18:18.055 "seek_hole": false, 00:18:18.055 "seek_data": false, 00:18:18.055 "copy": true, 00:18:18.055 "nvme_iov_md": false 00:18:18.055 }, 00:18:18.055 "memory_domains": [ 00:18:18.055 { 00:18:18.055 "dma_device_id": "system", 00:18:18.055 "dma_device_type": 1 00:18:18.055 }, 00:18:18.055 { 00:18:18.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.055 "dma_device_type": 2 00:18:18.055 } 00:18:18.055 ], 00:18:18.055 "driver_specific": {} 00:18:18.055 } 00:18:18.055 ] 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.055 
05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.055 "name": "Existed_Raid", 00:18:18.055 "uuid": "87ef7791-caa8-4080-b8ff-43e7b7459b9d", 00:18:18.055 "strip_size_kb": 0, 00:18:18.055 "state": "online", 00:18:18.055 "raid_level": "raid1", 00:18:18.055 "superblock": false, 00:18:18.055 "num_base_bdevs": 3, 00:18:18.055 "num_base_bdevs_discovered": 3, 00:18:18.055 "num_base_bdevs_operational": 3, 00:18:18.055 "base_bdevs_list": [ 00:18:18.055 { 00:18:18.055 "name": "NewBaseBdev", 00:18:18.055 "uuid": "8ec5249a-9a1c-46ca-9aee-4e8a1250f7bf", 00:18:18.055 "is_configured": true, 00:18:18.055 
"data_offset": 0, 00:18:18.055 "data_size": 65536 00:18:18.055 }, 00:18:18.055 { 00:18:18.055 "name": "BaseBdev2", 00:18:18.055 "uuid": "0b4ade37-51e6-4fe8-b057-8da060678158", 00:18:18.055 "is_configured": true, 00:18:18.055 "data_offset": 0, 00:18:18.055 "data_size": 65536 00:18:18.055 }, 00:18:18.055 { 00:18:18.055 "name": "BaseBdev3", 00:18:18.055 "uuid": "4a7a2169-998f-4b3a-a62c-e6224a8e5d45", 00:18:18.055 "is_configured": true, 00:18:18.055 "data_offset": 0, 00:18:18.055 "data_size": 65536 00:18:18.055 } 00:18:18.055 ] 00:18:18.055 }' 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.055 05:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.313 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:18.313 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:18.313 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:18.313 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:18.313 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:18.313 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:18.313 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:18.313 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.313 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.313 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:18.573 [2024-11-20 05:28:50.147416] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:18:18.573 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.573 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:18.573 "name": "Existed_Raid", 00:18:18.573 "aliases": [ 00:18:18.573 "87ef7791-caa8-4080-b8ff-43e7b7459b9d" 00:18:18.573 ], 00:18:18.574 "product_name": "Raid Volume", 00:18:18.574 "block_size": 512, 00:18:18.574 "num_blocks": 65536, 00:18:18.574 "uuid": "87ef7791-caa8-4080-b8ff-43e7b7459b9d", 00:18:18.574 "assigned_rate_limits": { 00:18:18.574 "rw_ios_per_sec": 0, 00:18:18.574 "rw_mbytes_per_sec": 0, 00:18:18.574 "r_mbytes_per_sec": 0, 00:18:18.574 "w_mbytes_per_sec": 0 00:18:18.574 }, 00:18:18.574 "claimed": false, 00:18:18.574 "zoned": false, 00:18:18.574 "supported_io_types": { 00:18:18.574 "read": true, 00:18:18.574 "write": true, 00:18:18.574 "unmap": false, 00:18:18.574 "flush": false, 00:18:18.574 "reset": true, 00:18:18.574 "nvme_admin": false, 00:18:18.574 "nvme_io": false, 00:18:18.574 "nvme_io_md": false, 00:18:18.574 "write_zeroes": true, 00:18:18.574 "zcopy": false, 00:18:18.574 "get_zone_info": false, 00:18:18.574 "zone_management": false, 00:18:18.574 "zone_append": false, 00:18:18.574 "compare": false, 00:18:18.574 "compare_and_write": false, 00:18:18.574 "abort": false, 00:18:18.574 "seek_hole": false, 00:18:18.574 "seek_data": false, 00:18:18.574 "copy": false, 00:18:18.574 "nvme_iov_md": false 00:18:18.574 }, 00:18:18.574 "memory_domains": [ 00:18:18.574 { 00:18:18.574 "dma_device_id": "system", 00:18:18.574 "dma_device_type": 1 00:18:18.574 }, 00:18:18.574 { 00:18:18.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.574 "dma_device_type": 2 00:18:18.574 }, 00:18:18.574 { 00:18:18.574 "dma_device_id": "system", 00:18:18.574 "dma_device_type": 1 00:18:18.574 }, 00:18:18.574 { 00:18:18.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.574 "dma_device_type": 2 00:18:18.574 }, 00:18:18.574 { 00:18:18.574 "dma_device_id": 
"system", 00:18:18.574 "dma_device_type": 1 00:18:18.574 }, 00:18:18.574 { 00:18:18.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.574 "dma_device_type": 2 00:18:18.574 } 00:18:18.574 ], 00:18:18.574 "driver_specific": { 00:18:18.574 "raid": { 00:18:18.574 "uuid": "87ef7791-caa8-4080-b8ff-43e7b7459b9d", 00:18:18.574 "strip_size_kb": 0, 00:18:18.574 "state": "online", 00:18:18.574 "raid_level": "raid1", 00:18:18.574 "superblock": false, 00:18:18.574 "num_base_bdevs": 3, 00:18:18.574 "num_base_bdevs_discovered": 3, 00:18:18.574 "num_base_bdevs_operational": 3, 00:18:18.574 "base_bdevs_list": [ 00:18:18.574 { 00:18:18.574 "name": "NewBaseBdev", 00:18:18.574 "uuid": "8ec5249a-9a1c-46ca-9aee-4e8a1250f7bf", 00:18:18.574 "is_configured": true, 00:18:18.574 "data_offset": 0, 00:18:18.574 "data_size": 65536 00:18:18.574 }, 00:18:18.574 { 00:18:18.574 "name": "BaseBdev2", 00:18:18.574 "uuid": "0b4ade37-51e6-4fe8-b057-8da060678158", 00:18:18.574 "is_configured": true, 00:18:18.574 "data_offset": 0, 00:18:18.574 "data_size": 65536 00:18:18.574 }, 00:18:18.574 { 00:18:18.574 "name": "BaseBdev3", 00:18:18.574 "uuid": "4a7a2169-998f-4b3a-a62c-e6224a8e5d45", 00:18:18.574 "is_configured": true, 00:18:18.574 "data_offset": 0, 00:18:18.574 "data_size": 65536 00:18:18.574 } 00:18:18.574 ] 00:18:18.574 } 00:18:18.574 } 00:18:18.574 }' 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:18.574 BaseBdev2 00:18:18.574 BaseBdev3' 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:18.574 05:28:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.574 
05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.574 [2024-11-20 05:28:50.319130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:18.574 [2024-11-20 05:28:50.319161] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:18.574 [2024-11-20 05:28:50.319230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.574 [2024-11-20 05:28:50.319475] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:18.574 [2024-11-20 05:28:50.319485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 65846 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65846 ']' 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65846 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65846 00:18:18.574 killing process with pid 65846 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65846' 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 65846 00:18:18.574 [2024-11-20 05:28:50.348523] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:18.574 05:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65846 00:18:18.833 [2024-11-20 05:28:50.502683] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:19.398 00:18:19.398 real 0m7.617s 00:18:19.398 user 0m12.132s 00:18:19.398 sys 0m1.339s 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:19.398 ************************************ 00:18:19.398 END TEST raid_state_function_test 00:18:19.398 ************************************ 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:19.398 05:28:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:18:19.398 05:28:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:19.398 05:28:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:19.398 05:28:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:19.398 ************************************ 00:18:19.398 START TEST raid_state_function_test_sb 00:18:19.398 ************************************ 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:19.398 
05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:19.398 Process raid pid: 66440 00:18:19.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:19.398 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66440 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66440' 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66440 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@833 -- # '[' -z 66440 ']' 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.399 05:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:19.399 [2024-11-20 05:28:51.212397] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:18:19.399 [2024-11-20 05:28:51.212520] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.657 [2024-11-20 05:28:51.367549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.657 [2024-11-20 05:28:51.480040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.914 [2024-11-20 05:28:51.628969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.914 [2024-11-20 05:28:51.629021] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.480 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:18:20.481 05:28:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.481 [2024-11-20 05:28:52.065013] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:20.481 [2024-11-20 05:28:52.065069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:20.481 [2024-11-20 05:28:52.065079] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:20.481 [2024-11-20 05:28:52.065089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.481 [2024-11-20 05:28:52.065096] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:20.481 [2024-11-20 05:28:52.065105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.481 "name": "Existed_Raid", 00:18:20.481 "uuid": "efb02010-0ea5-4586-a224-c065acffec08", 00:18:20.481 "strip_size_kb": 0, 00:18:20.481 "state": "configuring", 00:18:20.481 "raid_level": "raid1", 00:18:20.481 "superblock": true, 00:18:20.481 "num_base_bdevs": 3, 00:18:20.481 "num_base_bdevs_discovered": 0, 00:18:20.481 "num_base_bdevs_operational": 3, 00:18:20.481 "base_bdevs_list": [ 00:18:20.481 { 00:18:20.481 "name": "BaseBdev1", 00:18:20.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.481 "is_configured": false, 00:18:20.481 "data_offset": 0, 00:18:20.481 "data_size": 0 00:18:20.481 }, 00:18:20.481 { 00:18:20.481 "name": "BaseBdev2", 00:18:20.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.481 "is_configured": false, 00:18:20.481 "data_offset": 0, 00:18:20.481 "data_size": 0 
00:18:20.481 }, 00:18:20.481 { 00:18:20.481 "name": "BaseBdev3", 00:18:20.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.481 "is_configured": false, 00:18:20.481 "data_offset": 0, 00:18:20.481 "data_size": 0 00:18:20.481 } 00:18:20.481 ] 00:18:20.481 }' 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.481 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.739 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:20.739 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.739 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.739 [2024-11-20 05:28:52.389041] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:20.739 [2024-11-20 05:28:52.389080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:20.739 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.739 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:20.739 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.739 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.739 [2024-11-20 05:28:52.397038] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:20.739 [2024-11-20 05:28:52.397167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:20.739 [2024-11-20 05:28:52.397226] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:18:20.739 [2024-11-20 05:28:52.397299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.739 [2024-11-20 05:28:52.397353] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:20.739 [2024-11-20 05:28:52.397398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.740 [2024-11-20 05:28:52.432405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.740 BaseBdev1 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:20.740 05:28:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.740 [ 00:18:20.740 { 00:18:20.740 "name": "BaseBdev1", 00:18:20.740 "aliases": [ 00:18:20.740 "6f35fd92-0c9a-43af-aab6-f4bdf49f2ed2" 00:18:20.740 ], 00:18:20.740 "product_name": "Malloc disk", 00:18:20.740 "block_size": 512, 00:18:20.740 "num_blocks": 65536, 00:18:20.740 "uuid": "6f35fd92-0c9a-43af-aab6-f4bdf49f2ed2", 00:18:20.740 "assigned_rate_limits": { 00:18:20.740 "rw_ios_per_sec": 0, 00:18:20.740 "rw_mbytes_per_sec": 0, 00:18:20.740 "r_mbytes_per_sec": 0, 00:18:20.740 "w_mbytes_per_sec": 0 00:18:20.740 }, 00:18:20.740 "claimed": true, 00:18:20.740 "claim_type": "exclusive_write", 00:18:20.740 "zoned": false, 00:18:20.740 "supported_io_types": { 00:18:20.740 "read": true, 00:18:20.740 "write": true, 00:18:20.740 "unmap": true, 00:18:20.740 "flush": true, 00:18:20.740 "reset": true, 00:18:20.740 "nvme_admin": false, 00:18:20.740 "nvme_io": false, 00:18:20.740 "nvme_io_md": false, 00:18:20.740 "write_zeroes": true, 00:18:20.740 "zcopy": true, 00:18:20.740 "get_zone_info": false, 00:18:20.740 "zone_management": false, 00:18:20.740 "zone_append": false, 00:18:20.740 "compare": false, 00:18:20.740 "compare_and_write": false, 00:18:20.740 "abort": true, 00:18:20.740 "seek_hole": false, 00:18:20.740 "seek_data": false, 00:18:20.740 "copy": true, 00:18:20.740 "nvme_iov_md": false 00:18:20.740 }, 
00:18:20.740 "memory_domains": [ 00:18:20.740 { 00:18:20.740 "dma_device_id": "system", 00:18:20.740 "dma_device_type": 1 00:18:20.740 }, 00:18:20.740 { 00:18:20.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.740 "dma_device_type": 2 00:18:20.740 } 00:18:20.740 ], 00:18:20.740 "driver_specific": {} 00:18:20.740 } 00:18:20.740 ] 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.740 "name": "Existed_Raid", 00:18:20.740 "uuid": "5a97e20d-eec8-406c-9776-a931d176978d", 00:18:20.740 "strip_size_kb": 0, 00:18:20.740 "state": "configuring", 00:18:20.740 "raid_level": "raid1", 00:18:20.740 "superblock": true, 00:18:20.740 "num_base_bdevs": 3, 00:18:20.740 "num_base_bdevs_discovered": 1, 00:18:20.740 "num_base_bdevs_operational": 3, 00:18:20.740 "base_bdevs_list": [ 00:18:20.740 { 00:18:20.740 "name": "BaseBdev1", 00:18:20.740 "uuid": "6f35fd92-0c9a-43af-aab6-f4bdf49f2ed2", 00:18:20.740 "is_configured": true, 00:18:20.740 "data_offset": 2048, 00:18:20.740 "data_size": 63488 00:18:20.740 }, 00:18:20.740 { 00:18:20.740 "name": "BaseBdev2", 00:18:20.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.740 "is_configured": false, 00:18:20.740 "data_offset": 0, 00:18:20.740 "data_size": 0 00:18:20.740 }, 00:18:20.740 { 00:18:20.740 "name": "BaseBdev3", 00:18:20.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.740 "is_configured": false, 00:18:20.740 "data_offset": 0, 00:18:20.740 "data_size": 0 00:18:20.740 } 00:18:20.740 ] 00:18:20.740 }' 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.740 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.998 
05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.998 [2024-11-20 05:28:52.772528] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:20.998 [2024-11-20 05:28:52.772587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.998 [2024-11-20 05:28:52.780564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.998 [2024-11-20 05:28:52.782508] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:20.998 [2024-11-20 05:28:52.782550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.998 [2024-11-20 05:28:52.782559] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:20.998 [2024-11-20 05:28:52.782569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 3 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.998 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.998 "name": "Existed_Raid", 00:18:20.998 "uuid": "af693df8-6070-4a67-bbac-c3b29b455962", 00:18:20.998 "strip_size_kb": 0, 00:18:20.998 "state": "configuring", 00:18:20.998 "raid_level": "raid1", 00:18:20.998 "superblock": true, 00:18:20.998 
"num_base_bdevs": 3, 00:18:20.998 "num_base_bdevs_discovered": 1, 00:18:20.998 "num_base_bdevs_operational": 3, 00:18:20.998 "base_bdevs_list": [ 00:18:20.998 { 00:18:20.998 "name": "BaseBdev1", 00:18:20.999 "uuid": "6f35fd92-0c9a-43af-aab6-f4bdf49f2ed2", 00:18:20.999 "is_configured": true, 00:18:20.999 "data_offset": 2048, 00:18:20.999 "data_size": 63488 00:18:20.999 }, 00:18:20.999 { 00:18:20.999 "name": "BaseBdev2", 00:18:20.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.999 "is_configured": false, 00:18:20.999 "data_offset": 0, 00:18:20.999 "data_size": 0 00:18:20.999 }, 00:18:20.999 { 00:18:20.999 "name": "BaseBdev3", 00:18:20.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.999 "is_configured": false, 00:18:20.999 "data_offset": 0, 00:18:20.999 "data_size": 0 00:18:20.999 } 00:18:20.999 ] 00:18:20.999 }' 00:18:20.999 05:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.999 05:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.257 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:21.257 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.257 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.515 [2024-11-20 05:28:53.105274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:21.515 BaseBdev2 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 
-- # local bdev_timeout= 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.515 [ 00:18:21.515 { 00:18:21.515 "name": "BaseBdev2", 00:18:21.515 "aliases": [ 00:18:21.515 "6457cb38-af66-4bdc-a06c-f3034fc799cb" 00:18:21.515 ], 00:18:21.515 "product_name": "Malloc disk", 00:18:21.515 "block_size": 512, 00:18:21.515 "num_blocks": 65536, 00:18:21.515 "uuid": "6457cb38-af66-4bdc-a06c-f3034fc799cb", 00:18:21.515 "assigned_rate_limits": { 00:18:21.515 "rw_ios_per_sec": 0, 00:18:21.515 "rw_mbytes_per_sec": 0, 00:18:21.515 "r_mbytes_per_sec": 0, 00:18:21.515 "w_mbytes_per_sec": 0 00:18:21.515 }, 00:18:21.515 "claimed": true, 00:18:21.515 "claim_type": "exclusive_write", 00:18:21.515 "zoned": false, 00:18:21.515 "supported_io_types": { 00:18:21.515 "read": true, 00:18:21.515 "write": true, 00:18:21.515 "unmap": true, 00:18:21.515 "flush": true, 00:18:21.515 "reset": true, 00:18:21.515 
"nvme_admin": false, 00:18:21.515 "nvme_io": false, 00:18:21.515 "nvme_io_md": false, 00:18:21.515 "write_zeroes": true, 00:18:21.515 "zcopy": true, 00:18:21.515 "get_zone_info": false, 00:18:21.515 "zone_management": false, 00:18:21.515 "zone_append": false, 00:18:21.515 "compare": false, 00:18:21.515 "compare_and_write": false, 00:18:21.515 "abort": true, 00:18:21.515 "seek_hole": false, 00:18:21.515 "seek_data": false, 00:18:21.515 "copy": true, 00:18:21.515 "nvme_iov_md": false 00:18:21.515 }, 00:18:21.515 "memory_domains": [ 00:18:21.515 { 00:18:21.515 "dma_device_id": "system", 00:18:21.515 "dma_device_type": 1 00:18:21.515 }, 00:18:21.515 { 00:18:21.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.515 "dma_device_type": 2 00:18:21.515 } 00:18:21.515 ], 00:18:21.515 "driver_specific": {} 00:18:21.515 } 00:18:21.515 ] 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.515 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.516 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.516 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.516 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:18:21.516 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.516 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.516 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.516 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.516 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.516 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.516 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.516 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.516 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.516 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.516 "name": "Existed_Raid", 00:18:21.516 "uuid": "af693df8-6070-4a67-bbac-c3b29b455962", 00:18:21.516 "strip_size_kb": 0, 00:18:21.516 "state": "configuring", 00:18:21.516 "raid_level": "raid1", 00:18:21.516 "superblock": true, 00:18:21.516 "num_base_bdevs": 3, 00:18:21.516 "num_base_bdevs_discovered": 2, 00:18:21.516 "num_base_bdevs_operational": 3, 00:18:21.516 "base_bdevs_list": [ 00:18:21.516 { 00:18:21.516 "name": "BaseBdev1", 00:18:21.516 "uuid": "6f35fd92-0c9a-43af-aab6-f4bdf49f2ed2", 00:18:21.516 "is_configured": true, 00:18:21.516 "data_offset": 2048, 00:18:21.516 "data_size": 63488 00:18:21.516 }, 00:18:21.516 { 00:18:21.516 "name": "BaseBdev2", 00:18:21.516 "uuid": "6457cb38-af66-4bdc-a06c-f3034fc799cb", 00:18:21.516 "is_configured": true, 00:18:21.516 "data_offset": 2048, 00:18:21.516 "data_size": 
63488 00:18:21.516 }, 00:18:21.516 { 00:18:21.516 "name": "BaseBdev3", 00:18:21.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.516 "is_configured": false, 00:18:21.516 "data_offset": 0, 00:18:21.516 "data_size": 0 00:18:21.516 } 00:18:21.516 ] 00:18:21.516 }' 00:18:21.516 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.516 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.774 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:21.774 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.774 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.774 [2024-11-20 05:28:53.476212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:21.774 [2024-11-20 05:28:53.476477] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:21.774 [2024-11-20 05:28:53.476495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:21.774 [2024-11-20 05:28:53.476740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:21.774 [2024-11-20 05:28:53.476875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:21.774 [2024-11-20 05:28:53.476883] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:21.774 BaseBdev3 00:18:21.774 [2024-11-20 05:28:53.477006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.774 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.774 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:21.774 
05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:21.774 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:21.774 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:21.774 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:21.774 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:21.774 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:21.774 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.774 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.774 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.774 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:21.774 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.774 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.774 [ 00:18:21.774 { 00:18:21.774 "name": "BaseBdev3", 00:18:21.774 "aliases": [ 00:18:21.774 "701c407e-1509-44d4-bbec-483fae6b563a" 00:18:21.774 ], 00:18:21.774 "product_name": "Malloc disk", 00:18:21.774 "block_size": 512, 00:18:21.774 "num_blocks": 65536, 00:18:21.774 "uuid": "701c407e-1509-44d4-bbec-483fae6b563a", 00:18:21.774 "assigned_rate_limits": { 00:18:21.774 "rw_ios_per_sec": 0, 00:18:21.774 "rw_mbytes_per_sec": 0, 00:18:21.774 "r_mbytes_per_sec": 0, 00:18:21.774 "w_mbytes_per_sec": 0 00:18:21.774 }, 00:18:21.774 "claimed": true, 00:18:21.774 "claim_type": "exclusive_write", 00:18:21.774 "zoned": 
false, 00:18:21.774 "supported_io_types": { 00:18:21.774 "read": true, 00:18:21.775 "write": true, 00:18:21.775 "unmap": true, 00:18:21.775 "flush": true, 00:18:21.775 "reset": true, 00:18:21.775 "nvme_admin": false, 00:18:21.775 "nvme_io": false, 00:18:21.775 "nvme_io_md": false, 00:18:21.775 "write_zeroes": true, 00:18:21.775 "zcopy": true, 00:18:21.775 "get_zone_info": false, 00:18:21.775 "zone_management": false, 00:18:21.775 "zone_append": false, 00:18:21.775 "compare": false, 00:18:21.775 "compare_and_write": false, 00:18:21.775 "abort": true, 00:18:21.775 "seek_hole": false, 00:18:21.775 "seek_data": false, 00:18:21.775 "copy": true, 00:18:21.775 "nvme_iov_md": false 00:18:21.775 }, 00:18:21.775 "memory_domains": [ 00:18:21.775 { 00:18:21.775 "dma_device_id": "system", 00:18:21.775 "dma_device_type": 1 00:18:21.775 }, 00:18:21.775 { 00:18:21.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.775 "dma_device_type": 2 00:18:21.775 } 00:18:21.775 ], 00:18:21.775 "driver_specific": {} 00:18:21.775 } 00:18:21.775 ] 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.775 05:28:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.775 "name": "Existed_Raid", 00:18:21.775 "uuid": "af693df8-6070-4a67-bbac-c3b29b455962", 00:18:21.775 "strip_size_kb": 0, 00:18:21.775 "state": "online", 00:18:21.775 "raid_level": "raid1", 00:18:21.775 "superblock": true, 00:18:21.775 "num_base_bdevs": 3, 00:18:21.775 "num_base_bdevs_discovered": 3, 00:18:21.775 "num_base_bdevs_operational": 3, 00:18:21.775 "base_bdevs_list": [ 00:18:21.775 { 00:18:21.775 "name": "BaseBdev1", 00:18:21.775 "uuid": "6f35fd92-0c9a-43af-aab6-f4bdf49f2ed2", 00:18:21.775 "is_configured": true, 00:18:21.775 "data_offset": 2048, 00:18:21.775 "data_size": 63488 00:18:21.775 }, 00:18:21.775 { 00:18:21.775 
"name": "BaseBdev2", 00:18:21.775 "uuid": "6457cb38-af66-4bdc-a06c-f3034fc799cb", 00:18:21.775 "is_configured": true, 00:18:21.775 "data_offset": 2048, 00:18:21.775 "data_size": 63488 00:18:21.775 }, 00:18:21.775 { 00:18:21.775 "name": "BaseBdev3", 00:18:21.775 "uuid": "701c407e-1509-44d4-bbec-483fae6b563a", 00:18:21.775 "is_configured": true, 00:18:21.775 "data_offset": 2048, 00:18:21.775 "data_size": 63488 00:18:21.775 } 00:18:21.775 ] 00:18:21.775 }' 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.775 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.032 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:22.032 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:22.032 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:22.032 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:22.032 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:22.032 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:22.032 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:22.032 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:22.032 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.032 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.032 [2024-11-20 05:28:53.848626] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.289 05:28:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.289 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:22.289 "name": "Existed_Raid", 00:18:22.289 "aliases": [ 00:18:22.289 "af693df8-6070-4a67-bbac-c3b29b455962" 00:18:22.289 ], 00:18:22.289 "product_name": "Raid Volume", 00:18:22.289 "block_size": 512, 00:18:22.289 "num_blocks": 63488, 00:18:22.289 "uuid": "af693df8-6070-4a67-bbac-c3b29b455962", 00:18:22.289 "assigned_rate_limits": { 00:18:22.289 "rw_ios_per_sec": 0, 00:18:22.289 "rw_mbytes_per_sec": 0, 00:18:22.289 "r_mbytes_per_sec": 0, 00:18:22.289 "w_mbytes_per_sec": 0 00:18:22.289 }, 00:18:22.289 "claimed": false, 00:18:22.289 "zoned": false, 00:18:22.289 "supported_io_types": { 00:18:22.289 "read": true, 00:18:22.289 "write": true, 00:18:22.289 "unmap": false, 00:18:22.289 "flush": false, 00:18:22.289 "reset": true, 00:18:22.289 "nvme_admin": false, 00:18:22.289 "nvme_io": false, 00:18:22.289 "nvme_io_md": false, 00:18:22.289 "write_zeroes": true, 00:18:22.289 "zcopy": false, 00:18:22.289 "get_zone_info": false, 00:18:22.289 "zone_management": false, 00:18:22.289 "zone_append": false, 00:18:22.289 "compare": false, 00:18:22.289 "compare_and_write": false, 00:18:22.289 "abort": false, 00:18:22.289 "seek_hole": false, 00:18:22.289 "seek_data": false, 00:18:22.289 "copy": false, 00:18:22.289 "nvme_iov_md": false 00:18:22.289 }, 00:18:22.289 "memory_domains": [ 00:18:22.289 { 00:18:22.289 "dma_device_id": "system", 00:18:22.289 "dma_device_type": 1 00:18:22.289 }, 00:18:22.289 { 00:18:22.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.289 "dma_device_type": 2 00:18:22.289 }, 00:18:22.289 { 00:18:22.289 "dma_device_id": "system", 00:18:22.289 "dma_device_type": 1 00:18:22.289 }, 00:18:22.289 { 00:18:22.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.289 "dma_device_type": 2 00:18:22.289 }, 00:18:22.289 { 00:18:22.289 "dma_device_id": "system", 00:18:22.289 "dma_device_type": 1 00:18:22.289 }, 
00:18:22.289 { 00:18:22.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.289 "dma_device_type": 2 00:18:22.289 } 00:18:22.289 ], 00:18:22.289 "driver_specific": { 00:18:22.289 "raid": { 00:18:22.289 "uuid": "af693df8-6070-4a67-bbac-c3b29b455962", 00:18:22.289 "strip_size_kb": 0, 00:18:22.289 "state": "online", 00:18:22.289 "raid_level": "raid1", 00:18:22.289 "superblock": true, 00:18:22.289 "num_base_bdevs": 3, 00:18:22.289 "num_base_bdevs_discovered": 3, 00:18:22.289 "num_base_bdevs_operational": 3, 00:18:22.289 "base_bdevs_list": [ 00:18:22.289 { 00:18:22.289 "name": "BaseBdev1", 00:18:22.289 "uuid": "6f35fd92-0c9a-43af-aab6-f4bdf49f2ed2", 00:18:22.289 "is_configured": true, 00:18:22.289 "data_offset": 2048, 00:18:22.289 "data_size": 63488 00:18:22.289 }, 00:18:22.289 { 00:18:22.289 "name": "BaseBdev2", 00:18:22.289 "uuid": "6457cb38-af66-4bdc-a06c-f3034fc799cb", 00:18:22.289 "is_configured": true, 00:18:22.289 "data_offset": 2048, 00:18:22.289 "data_size": 63488 00:18:22.289 }, 00:18:22.289 { 00:18:22.289 "name": "BaseBdev3", 00:18:22.289 "uuid": "701c407e-1509-44d4-bbec-483fae6b563a", 00:18:22.289 "is_configured": true, 00:18:22.289 "data_offset": 2048, 00:18:22.289 "data_size": 63488 00:18:22.289 } 00:18:22.289 ] 00:18:22.289 } 00:18:22.289 } 00:18:22.289 }' 00:18:22.289 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:22.289 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:22.289 BaseBdev2 00:18:22.289 BaseBdev3' 00:18:22.289 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.289 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:22.289 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:18:22.289 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.289 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:22.289 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.290 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.290 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.290 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:22.290 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:22.290 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:22.290 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.290 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:22.290 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.290 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.290 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.290 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:22.290 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:22.290 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:22.290 05:28:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.290 05:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:22.290 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.290 05:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.290 [2024-11-20 05:28:54.028426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.290 "name": "Existed_Raid", 00:18:22.290 "uuid": "af693df8-6070-4a67-bbac-c3b29b455962", 00:18:22.290 "strip_size_kb": 0, 00:18:22.290 "state": "online", 00:18:22.290 "raid_level": 
"raid1", 00:18:22.290 "superblock": true, 00:18:22.290 "num_base_bdevs": 3, 00:18:22.290 "num_base_bdevs_discovered": 2, 00:18:22.290 "num_base_bdevs_operational": 2, 00:18:22.290 "base_bdevs_list": [ 00:18:22.290 { 00:18:22.290 "name": null, 00:18:22.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.290 "is_configured": false, 00:18:22.290 "data_offset": 0, 00:18:22.290 "data_size": 63488 00:18:22.290 }, 00:18:22.290 { 00:18:22.290 "name": "BaseBdev2", 00:18:22.290 "uuid": "6457cb38-af66-4bdc-a06c-f3034fc799cb", 00:18:22.290 "is_configured": true, 00:18:22.290 "data_offset": 2048, 00:18:22.290 "data_size": 63488 00:18:22.290 }, 00:18:22.290 { 00:18:22.290 "name": "BaseBdev3", 00:18:22.290 "uuid": "701c407e-1509-44d4-bbec-483fae6b563a", 00:18:22.290 "is_configured": true, 00:18:22.290 "data_offset": 2048, 00:18:22.290 "data_size": 63488 00:18:22.290 } 00:18:22.290 ] 00:18:22.290 }' 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.290 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.854 [2024-11-20 05:28:54.454550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.854 05:28:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.854 [2024-11-20 05:28:54.536505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:22.854 [2024-11-20 05:28:54.536608] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:22.854 [2024-11-20 05:28:54.587126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.854 [2024-11-20 05:28:54.587340] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.854 [2024-11-20 05:28:54.587450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:22.854 05:28:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.854 BaseBdev2 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:22.854 05:28:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.854 [ 00:18:22.854 { 00:18:22.854 "name": "BaseBdev2", 00:18:22.854 "aliases": [ 00:18:22.854 "53ab3d58-ea67-4f25-b56c-4d0da2f0ead6" 00:18:22.854 ], 00:18:22.854 "product_name": "Malloc disk", 00:18:22.854 "block_size": 512, 00:18:22.854 "num_blocks": 65536, 00:18:22.854 "uuid": "53ab3d58-ea67-4f25-b56c-4d0da2f0ead6", 00:18:22.854 "assigned_rate_limits": { 00:18:22.854 "rw_ios_per_sec": 0, 00:18:22.854 "rw_mbytes_per_sec": 0, 00:18:22.854 "r_mbytes_per_sec": 0, 00:18:22.854 "w_mbytes_per_sec": 0 00:18:22.854 }, 00:18:22.854 "claimed": false, 00:18:22.854 "zoned": false, 00:18:22.854 "supported_io_types": { 00:18:22.854 "read": true, 00:18:22.854 "write": true, 00:18:22.854 "unmap": true, 00:18:22.854 "flush": true, 00:18:22.854 "reset": true, 00:18:22.854 "nvme_admin": false, 00:18:22.854 "nvme_io": false, 00:18:22.854 "nvme_io_md": false, 00:18:22.854 "write_zeroes": true, 00:18:22.854 "zcopy": true, 00:18:22.854 "get_zone_info": false, 00:18:22.854 "zone_management": false, 00:18:22.854 "zone_append": false, 00:18:22.854 "compare": false, 00:18:22.854 "compare_and_write": false, 00:18:22.854 "abort": true, 00:18:22.854 "seek_hole": false, 00:18:22.854 "seek_data": false, 00:18:22.854 "copy": true, 00:18:22.854 "nvme_iov_md": false 00:18:22.854 }, 00:18:22.854 "memory_domains": [ 00:18:22.854 { 00:18:22.854 "dma_device_id": "system", 00:18:22.854 "dma_device_type": 1 00:18:22.854 }, 00:18:22.854 { 00:18:22.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.854 "dma_device_type": 2 00:18:22.854 } 00:18:22.854 ], 00:18:22.854 "driver_specific": {} 00:18:22.854 } 00:18:22.854 ] 00:18:22.854 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.855 05:28:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:18:22.855 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:22.855 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:22.855 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:22.855 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.855 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.112 BaseBdev3 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.112 [ 00:18:23.112 { 00:18:23.112 "name": "BaseBdev3", 00:18:23.112 "aliases": [ 00:18:23.112 "3174ee8f-12b8-4978-8c0f-d4f4e9d91188" 00:18:23.112 ], 00:18:23.112 "product_name": "Malloc disk", 00:18:23.112 "block_size": 512, 00:18:23.112 "num_blocks": 65536, 00:18:23.112 "uuid": "3174ee8f-12b8-4978-8c0f-d4f4e9d91188", 00:18:23.112 "assigned_rate_limits": { 00:18:23.112 "rw_ios_per_sec": 0, 00:18:23.112 "rw_mbytes_per_sec": 0, 00:18:23.112 "r_mbytes_per_sec": 0, 00:18:23.112 "w_mbytes_per_sec": 0 00:18:23.112 }, 00:18:23.112 "claimed": false, 00:18:23.112 "zoned": false, 00:18:23.112 "supported_io_types": { 00:18:23.112 "read": true, 00:18:23.112 "write": true, 00:18:23.112 "unmap": true, 00:18:23.112 "flush": true, 00:18:23.112 "reset": true, 00:18:23.112 "nvme_admin": false, 00:18:23.112 "nvme_io": false, 00:18:23.112 "nvme_io_md": false, 00:18:23.112 "write_zeroes": true, 00:18:23.112 "zcopy": true, 00:18:23.112 "get_zone_info": false, 00:18:23.112 "zone_management": false, 00:18:23.112 "zone_append": false, 00:18:23.112 "compare": false, 00:18:23.112 "compare_and_write": false, 00:18:23.112 "abort": true, 00:18:23.112 "seek_hole": false, 00:18:23.112 "seek_data": false, 00:18:23.112 "copy": true, 00:18:23.112 "nvme_iov_md": false 00:18:23.112 }, 00:18:23.112 "memory_domains": [ 00:18:23.112 { 00:18:23.112 "dma_device_id": "system", 00:18:23.112 "dma_device_type": 1 00:18:23.112 }, 00:18:23.112 { 00:18:23.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.112 "dma_device_type": 2 00:18:23.112 } 00:18:23.112 ], 00:18:23.112 "driver_specific": {} 00:18:23.112 } 00:18:23.112 ] 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.112 
05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.112 [2024-11-20 05:28:54.727332] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:23.112 [2024-11-20 05:28:54.727478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:23.112 [2024-11-20 05:28:54.727541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:23.112 [2024-11-20 05:28:54.729329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.112 05:28:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.112 "name": "Existed_Raid", 00:18:23.112 "uuid": "7bd5040a-5f05-4b92-a81b-fe6659578886", 00:18:23.112 "strip_size_kb": 0, 00:18:23.112 "state": "configuring", 00:18:23.112 "raid_level": "raid1", 00:18:23.112 "superblock": true, 00:18:23.112 "num_base_bdevs": 3, 00:18:23.112 "num_base_bdevs_discovered": 2, 00:18:23.112 "num_base_bdevs_operational": 3, 00:18:23.112 "base_bdevs_list": [ 00:18:23.112 { 00:18:23.112 "name": "BaseBdev1", 00:18:23.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.112 "is_configured": false, 00:18:23.112 "data_offset": 0, 00:18:23.112 "data_size": 0 00:18:23.112 }, 00:18:23.112 { 00:18:23.112 "name": "BaseBdev2", 00:18:23.112 "uuid": "53ab3d58-ea67-4f25-b56c-4d0da2f0ead6", 00:18:23.112 "is_configured": 
true, 00:18:23.112 "data_offset": 2048, 00:18:23.112 "data_size": 63488 00:18:23.112 }, 00:18:23.112 { 00:18:23.112 "name": "BaseBdev3", 00:18:23.112 "uuid": "3174ee8f-12b8-4978-8c0f-d4f4e9d91188", 00:18:23.112 "is_configured": true, 00:18:23.112 "data_offset": 2048, 00:18:23.112 "data_size": 63488 00:18:23.112 } 00:18:23.112 ] 00:18:23.112 }' 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.112 05:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.371 [2024-11-20 05:28:55.059475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.371 05:28:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.371 "name": "Existed_Raid", 00:18:23.371 "uuid": "7bd5040a-5f05-4b92-a81b-fe6659578886", 00:18:23.371 "strip_size_kb": 0, 00:18:23.371 "state": "configuring", 00:18:23.371 "raid_level": "raid1", 00:18:23.371 "superblock": true, 00:18:23.371 "num_base_bdevs": 3, 00:18:23.371 "num_base_bdevs_discovered": 1, 00:18:23.371 "num_base_bdevs_operational": 3, 00:18:23.371 "base_bdevs_list": [ 00:18:23.371 { 00:18:23.371 "name": "BaseBdev1", 00:18:23.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.371 "is_configured": false, 00:18:23.371 "data_offset": 0, 00:18:23.371 "data_size": 0 00:18:23.371 }, 00:18:23.371 { 00:18:23.371 "name": null, 00:18:23.371 "uuid": "53ab3d58-ea67-4f25-b56c-4d0da2f0ead6", 00:18:23.371 "is_configured": false, 00:18:23.371 "data_offset": 0, 00:18:23.371 "data_size": 63488 00:18:23.371 }, 00:18:23.371 { 00:18:23.371 "name": "BaseBdev3", 00:18:23.371 "uuid": "3174ee8f-12b8-4978-8c0f-d4f4e9d91188", 00:18:23.371 "is_configured": true, 
00:18:23.371 "data_offset": 2048, 00:18:23.371 "data_size": 63488 00:18:23.371 } 00:18:23.371 ] 00:18:23.371 }' 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.371 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.629 [2024-11-20 05:28:55.433014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.629 BaseBdev1 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:23.629 
05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.629 [ 00:18:23.629 { 00:18:23.629 "name": "BaseBdev1", 00:18:23.629 "aliases": [ 00:18:23.629 "9aae70fb-9df5-48b4-8bf3-329a91e43ad6" 00:18:23.629 ], 00:18:23.629 "product_name": "Malloc disk", 00:18:23.629 "block_size": 512, 00:18:23.629 "num_blocks": 65536, 00:18:23.629 "uuid": "9aae70fb-9df5-48b4-8bf3-329a91e43ad6", 00:18:23.629 "assigned_rate_limits": { 00:18:23.629 "rw_ios_per_sec": 0, 00:18:23.629 "rw_mbytes_per_sec": 0, 00:18:23.629 "r_mbytes_per_sec": 0, 00:18:23.629 "w_mbytes_per_sec": 0 00:18:23.629 }, 00:18:23.629 "claimed": true, 00:18:23.629 "claim_type": "exclusive_write", 00:18:23.629 "zoned": false, 00:18:23.629 "supported_io_types": { 00:18:23.629 "read": true, 00:18:23.629 "write": true, 00:18:23.629 "unmap": true, 00:18:23.629 "flush": true, 00:18:23.629 "reset": true, 00:18:23.629 "nvme_admin": false, 00:18:23.629 "nvme_io": 
false, 00:18:23.629 "nvme_io_md": false, 00:18:23.629 "write_zeroes": true, 00:18:23.629 "zcopy": true, 00:18:23.629 "get_zone_info": false, 00:18:23.629 "zone_management": false, 00:18:23.629 "zone_append": false, 00:18:23.629 "compare": false, 00:18:23.629 "compare_and_write": false, 00:18:23.629 "abort": true, 00:18:23.629 "seek_hole": false, 00:18:23.629 "seek_data": false, 00:18:23.629 "copy": true, 00:18:23.629 "nvme_iov_md": false 00:18:23.629 }, 00:18:23.629 "memory_domains": [ 00:18:23.629 { 00:18:23.629 "dma_device_id": "system", 00:18:23.629 "dma_device_type": 1 00:18:23.629 }, 00:18:23.629 { 00:18:23.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.629 "dma_device_type": 2 00:18:23.629 } 00:18:23.629 ], 00:18:23.629 "driver_specific": {} 00:18:23.629 } 00:18:23.629 ] 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:23.629 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.630 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.630 05:28:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.630 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.630 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.630 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.630 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.630 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.887 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.887 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.887 "name": "Existed_Raid", 00:18:23.887 "uuid": "7bd5040a-5f05-4b92-a81b-fe6659578886", 00:18:23.887 "strip_size_kb": 0, 00:18:23.887 "state": "configuring", 00:18:23.887 "raid_level": "raid1", 00:18:23.887 "superblock": true, 00:18:23.888 "num_base_bdevs": 3, 00:18:23.888 "num_base_bdevs_discovered": 2, 00:18:23.888 "num_base_bdevs_operational": 3, 00:18:23.888 "base_bdevs_list": [ 00:18:23.888 { 00:18:23.888 "name": "BaseBdev1", 00:18:23.888 "uuid": "9aae70fb-9df5-48b4-8bf3-329a91e43ad6", 00:18:23.888 "is_configured": true, 00:18:23.888 "data_offset": 2048, 00:18:23.888 "data_size": 63488 00:18:23.888 }, 00:18:23.888 { 00:18:23.888 "name": null, 00:18:23.888 "uuid": "53ab3d58-ea67-4f25-b56c-4d0da2f0ead6", 00:18:23.888 "is_configured": false, 00:18:23.888 "data_offset": 0, 00:18:23.888 "data_size": 63488 00:18:23.888 }, 00:18:23.888 { 00:18:23.888 "name": "BaseBdev3", 00:18:23.888 "uuid": "3174ee8f-12b8-4978-8c0f-d4f4e9d91188", 00:18:23.888 "is_configured": true, 00:18:23.888 "data_offset": 2048, 00:18:23.888 "data_size": 63488 00:18:23.888 } 00:18:23.888 ] 00:18:23.888 }' 
00:18:23.888 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.888 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.145 [2024-11-20 05:28:55.809145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.145 
05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.145 "name": "Existed_Raid", 00:18:24.145 "uuid": "7bd5040a-5f05-4b92-a81b-fe6659578886", 00:18:24.145 "strip_size_kb": 0, 00:18:24.145 "state": "configuring", 00:18:24.145 "raid_level": "raid1", 00:18:24.145 "superblock": true, 00:18:24.145 "num_base_bdevs": 3, 00:18:24.145 "num_base_bdevs_discovered": 1, 00:18:24.145 "num_base_bdevs_operational": 3, 00:18:24.145 "base_bdevs_list": [ 00:18:24.145 { 00:18:24.145 "name": "BaseBdev1", 00:18:24.145 "uuid": "9aae70fb-9df5-48b4-8bf3-329a91e43ad6", 00:18:24.145 "is_configured": true, 00:18:24.145 "data_offset": 2048, 00:18:24.145 "data_size": 63488 00:18:24.145 }, 00:18:24.145 { 
00:18:24.145 "name": null, 00:18:24.145 "uuid": "53ab3d58-ea67-4f25-b56c-4d0da2f0ead6", 00:18:24.145 "is_configured": false, 00:18:24.145 "data_offset": 0, 00:18:24.145 "data_size": 63488 00:18:24.145 }, 00:18:24.145 { 00:18:24.145 "name": null, 00:18:24.145 "uuid": "3174ee8f-12b8-4978-8c0f-d4f4e9d91188", 00:18:24.145 "is_configured": false, 00:18:24.145 "data_offset": 0, 00:18:24.145 "data_size": 63488 00:18:24.145 } 00:18:24.145 ] 00:18:24.145 }' 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.145 05:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.404 [2024-11-20 05:28:56.169239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.404 05:28:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.404 "name": "Existed_Raid", 00:18:24.404 "uuid": "7bd5040a-5f05-4b92-a81b-fe6659578886", 00:18:24.404 "strip_size_kb": 0, 
00:18:24.404 "state": "configuring", 00:18:24.404 "raid_level": "raid1", 00:18:24.404 "superblock": true, 00:18:24.404 "num_base_bdevs": 3, 00:18:24.404 "num_base_bdevs_discovered": 2, 00:18:24.404 "num_base_bdevs_operational": 3, 00:18:24.404 "base_bdevs_list": [ 00:18:24.404 { 00:18:24.404 "name": "BaseBdev1", 00:18:24.404 "uuid": "9aae70fb-9df5-48b4-8bf3-329a91e43ad6", 00:18:24.404 "is_configured": true, 00:18:24.404 "data_offset": 2048, 00:18:24.404 "data_size": 63488 00:18:24.404 }, 00:18:24.404 { 00:18:24.404 "name": null, 00:18:24.404 "uuid": "53ab3d58-ea67-4f25-b56c-4d0da2f0ead6", 00:18:24.404 "is_configured": false, 00:18:24.404 "data_offset": 0, 00:18:24.404 "data_size": 63488 00:18:24.404 }, 00:18:24.404 { 00:18:24.404 "name": "BaseBdev3", 00:18:24.404 "uuid": "3174ee8f-12b8-4978-8c0f-d4f4e9d91188", 00:18:24.404 "is_configured": true, 00:18:24.404 "data_offset": 2048, 00:18:24.404 "data_size": 63488 00:18:24.404 } 00:18:24.404 ] 00:18:24.404 }' 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.404 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.970 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:24.970 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.970 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.970 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.970 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.970 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:24.970 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete 
BaseBdev1 00:18:24.970 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.970 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.971 [2024-11-20 05:28:56.525316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.971 05:28:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.971 "name": "Existed_Raid", 00:18:24.971 "uuid": "7bd5040a-5f05-4b92-a81b-fe6659578886", 00:18:24.971 "strip_size_kb": 0, 00:18:24.971 "state": "configuring", 00:18:24.971 "raid_level": "raid1", 00:18:24.971 "superblock": true, 00:18:24.971 "num_base_bdevs": 3, 00:18:24.971 "num_base_bdevs_discovered": 1, 00:18:24.971 "num_base_bdevs_operational": 3, 00:18:24.971 "base_bdevs_list": [ 00:18:24.971 { 00:18:24.971 "name": null, 00:18:24.971 "uuid": "9aae70fb-9df5-48b4-8bf3-329a91e43ad6", 00:18:24.971 "is_configured": false, 00:18:24.971 "data_offset": 0, 00:18:24.971 "data_size": 63488 00:18:24.971 }, 00:18:24.971 { 00:18:24.971 "name": null, 00:18:24.971 "uuid": "53ab3d58-ea67-4f25-b56c-4d0da2f0ead6", 00:18:24.971 "is_configured": false, 00:18:24.971 "data_offset": 0, 00:18:24.971 "data_size": 63488 00:18:24.971 }, 00:18:24.971 { 00:18:24.971 "name": "BaseBdev3", 00:18:24.971 "uuid": "3174ee8f-12b8-4978-8c0f-d4f4e9d91188", 00:18:24.971 "is_configured": true, 00:18:24.971 "data_offset": 2048, 00:18:24.971 "data_size": 63488 00:18:24.971 } 00:18:24.971 ] 00:18:24.971 }' 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.971 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.228 [2024-11-20 05:28:56.942916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.228 "name": "Existed_Raid", 00:18:25.228 "uuid": "7bd5040a-5f05-4b92-a81b-fe6659578886", 00:18:25.228 "strip_size_kb": 0, 00:18:25.228 "state": "configuring", 00:18:25.228 "raid_level": "raid1", 00:18:25.228 "superblock": true, 00:18:25.228 "num_base_bdevs": 3, 00:18:25.228 "num_base_bdevs_discovered": 2, 00:18:25.228 "num_base_bdevs_operational": 3, 00:18:25.228 "base_bdevs_list": [ 00:18:25.228 { 00:18:25.228 "name": null, 00:18:25.228 "uuid": "9aae70fb-9df5-48b4-8bf3-329a91e43ad6", 00:18:25.228 "is_configured": false, 00:18:25.228 "data_offset": 0, 00:18:25.228 "data_size": 63488 00:18:25.228 }, 00:18:25.228 { 00:18:25.228 "name": "BaseBdev2", 00:18:25.228 "uuid": "53ab3d58-ea67-4f25-b56c-4d0da2f0ead6", 00:18:25.228 "is_configured": true, 00:18:25.228 "data_offset": 2048, 00:18:25.228 "data_size": 63488 00:18:25.228 }, 00:18:25.228 { 00:18:25.228 "name": "BaseBdev3", 00:18:25.228 "uuid": "3174ee8f-12b8-4978-8c0f-d4f4e9d91188", 00:18:25.228 "is_configured": true, 00:18:25.228 "data_offset": 2048, 00:18:25.228 "data_size": 63488 00:18:25.228 } 00:18:25.228 ] 00:18:25.228 }' 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.228 05:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.485 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.485 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:25.485 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.485 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.485 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.485 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:25.485 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.485 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.485 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.485 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:25.485 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9aae70fb-9df5-48b4-8bf3-329a91e43ad6 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.742 [2024-11-20 05:28:57.347640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:25.742 [2024-11-20 05:28:57.347848] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:25.742 [2024-11-20 05:28:57.347859] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:25.742 [2024-11-20 05:28:57.348081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:25.742 [2024-11-20 05:28:57.348199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:25.742 NewBaseBdev 00:18:25.742 [2024-11-20 05:28:57.348209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:25.742 [2024-11-20 05:28:57.348314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.742 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.742 [ 00:18:25.742 { 00:18:25.742 "name": "NewBaseBdev", 00:18:25.742 "aliases": [ 00:18:25.742 "9aae70fb-9df5-48b4-8bf3-329a91e43ad6" 00:18:25.742 ], 00:18:25.742 "product_name": "Malloc disk", 00:18:25.742 "block_size": 512, 00:18:25.742 "num_blocks": 65536, 00:18:25.742 "uuid": "9aae70fb-9df5-48b4-8bf3-329a91e43ad6", 00:18:25.742 "assigned_rate_limits": { 00:18:25.742 "rw_ios_per_sec": 0, 00:18:25.743 "rw_mbytes_per_sec": 0, 00:18:25.743 "r_mbytes_per_sec": 0, 00:18:25.743 "w_mbytes_per_sec": 0 00:18:25.743 }, 00:18:25.743 "claimed": true, 00:18:25.743 "claim_type": "exclusive_write", 00:18:25.743 "zoned": false, 00:18:25.743 "supported_io_types": { 00:18:25.743 "read": true, 00:18:25.743 "write": true, 00:18:25.743 "unmap": true, 00:18:25.743 "flush": true, 00:18:25.743 "reset": true, 00:18:25.743 "nvme_admin": false, 00:18:25.743 "nvme_io": false, 00:18:25.743 "nvme_io_md": false, 00:18:25.743 "write_zeroes": true, 00:18:25.743 "zcopy": true, 00:18:25.743 "get_zone_info": false, 00:18:25.743 "zone_management": false, 00:18:25.743 "zone_append": false, 00:18:25.743 "compare": false, 00:18:25.743 "compare_and_write": false, 00:18:25.743 "abort": true, 00:18:25.743 "seek_hole": false, 00:18:25.743 "seek_data": false, 00:18:25.743 "copy": true, 00:18:25.743 "nvme_iov_md": false 00:18:25.743 }, 00:18:25.743 "memory_domains": [ 00:18:25.743 { 00:18:25.743 "dma_device_id": "system", 00:18:25.743 "dma_device_type": 1 00:18:25.743 }, 00:18:25.743 { 00:18:25.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.743 "dma_device_type": 2 00:18:25.743 } 00:18:25.743 ], 00:18:25.743 
"driver_specific": {} 00:18:25.743 } 00:18:25.743 ] 00:18:25.743 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.743 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:25.743 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:25.743 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.743 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.743 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.743 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.743 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:25.743 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.743 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.743 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.743 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.744 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.744 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.744 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.744 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.744 05:28:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.744 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.744 "name": "Existed_Raid", 00:18:25.744 "uuid": "7bd5040a-5f05-4b92-a81b-fe6659578886", 00:18:25.744 "strip_size_kb": 0, 00:18:25.744 "state": "online", 00:18:25.744 "raid_level": "raid1", 00:18:25.744 "superblock": true, 00:18:25.744 "num_base_bdevs": 3, 00:18:25.744 "num_base_bdevs_discovered": 3, 00:18:25.744 "num_base_bdevs_operational": 3, 00:18:25.744 "base_bdevs_list": [ 00:18:25.744 { 00:18:25.744 "name": "NewBaseBdev", 00:18:25.744 "uuid": "9aae70fb-9df5-48b4-8bf3-329a91e43ad6", 00:18:25.744 "is_configured": true, 00:18:25.744 "data_offset": 2048, 00:18:25.744 "data_size": 63488 00:18:25.744 }, 00:18:25.744 { 00:18:25.744 "name": "BaseBdev2", 00:18:25.744 "uuid": "53ab3d58-ea67-4f25-b56c-4d0da2f0ead6", 00:18:25.744 "is_configured": true, 00:18:25.744 "data_offset": 2048, 00:18:25.744 "data_size": 63488 00:18:25.744 }, 00:18:25.744 { 00:18:25.744 "name": "BaseBdev3", 00:18:25.744 "uuid": "3174ee8f-12b8-4978-8c0f-d4f4e9d91188", 00:18:25.744 "is_configured": true, 00:18:25.744 "data_offset": 2048, 00:18:25.744 "data_size": 63488 00:18:25.744 } 00:18:25.744 ] 00:18:25.744 }' 00:18:25.744 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.744 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:26.005 05:28:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:26.005 [2024-11-20 05:28:57.684077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:26.005 "name": "Existed_Raid", 00:18:26.005 "aliases": [ 00:18:26.005 "7bd5040a-5f05-4b92-a81b-fe6659578886" 00:18:26.005 ], 00:18:26.005 "product_name": "Raid Volume", 00:18:26.005 "block_size": 512, 00:18:26.005 "num_blocks": 63488, 00:18:26.005 "uuid": "7bd5040a-5f05-4b92-a81b-fe6659578886", 00:18:26.005 "assigned_rate_limits": { 00:18:26.005 "rw_ios_per_sec": 0, 00:18:26.005 "rw_mbytes_per_sec": 0, 00:18:26.005 "r_mbytes_per_sec": 0, 00:18:26.005 "w_mbytes_per_sec": 0 00:18:26.005 }, 00:18:26.005 "claimed": false, 00:18:26.005 "zoned": false, 00:18:26.005 "supported_io_types": { 00:18:26.005 "read": true, 00:18:26.005 "write": true, 00:18:26.005 "unmap": false, 00:18:26.005 "flush": false, 00:18:26.005 "reset": true, 00:18:26.005 "nvme_admin": false, 00:18:26.005 "nvme_io": false, 00:18:26.005 "nvme_io_md": false, 00:18:26.005 "write_zeroes": true, 00:18:26.005 "zcopy": false, 00:18:26.005 "get_zone_info": false, 00:18:26.005 "zone_management": false, 00:18:26.005 "zone_append": false, 
00:18:26.005 "compare": false, 00:18:26.005 "compare_and_write": false, 00:18:26.005 "abort": false, 00:18:26.005 "seek_hole": false, 00:18:26.005 "seek_data": false, 00:18:26.005 "copy": false, 00:18:26.005 "nvme_iov_md": false 00:18:26.005 }, 00:18:26.005 "memory_domains": [ 00:18:26.005 { 00:18:26.005 "dma_device_id": "system", 00:18:26.005 "dma_device_type": 1 00:18:26.005 }, 00:18:26.005 { 00:18:26.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.005 "dma_device_type": 2 00:18:26.005 }, 00:18:26.005 { 00:18:26.005 "dma_device_id": "system", 00:18:26.005 "dma_device_type": 1 00:18:26.005 }, 00:18:26.005 { 00:18:26.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.005 "dma_device_type": 2 00:18:26.005 }, 00:18:26.005 { 00:18:26.005 "dma_device_id": "system", 00:18:26.005 "dma_device_type": 1 00:18:26.005 }, 00:18:26.005 { 00:18:26.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.005 "dma_device_type": 2 00:18:26.005 } 00:18:26.005 ], 00:18:26.005 "driver_specific": { 00:18:26.005 "raid": { 00:18:26.005 "uuid": "7bd5040a-5f05-4b92-a81b-fe6659578886", 00:18:26.005 "strip_size_kb": 0, 00:18:26.005 "state": "online", 00:18:26.005 "raid_level": "raid1", 00:18:26.005 "superblock": true, 00:18:26.005 "num_base_bdevs": 3, 00:18:26.005 "num_base_bdevs_discovered": 3, 00:18:26.005 "num_base_bdevs_operational": 3, 00:18:26.005 "base_bdevs_list": [ 00:18:26.005 { 00:18:26.005 "name": "NewBaseBdev", 00:18:26.005 "uuid": "9aae70fb-9df5-48b4-8bf3-329a91e43ad6", 00:18:26.005 "is_configured": true, 00:18:26.005 "data_offset": 2048, 00:18:26.005 "data_size": 63488 00:18:26.005 }, 00:18:26.005 { 00:18:26.005 "name": "BaseBdev2", 00:18:26.005 "uuid": "53ab3d58-ea67-4f25-b56c-4d0da2f0ead6", 00:18:26.005 "is_configured": true, 00:18:26.005 "data_offset": 2048, 00:18:26.005 "data_size": 63488 00:18:26.005 }, 00:18:26.005 { 00:18:26.005 "name": "BaseBdev3", 00:18:26.005 "uuid": "3174ee8f-12b8-4978-8c0f-d4f4e9d91188", 00:18:26.005 "is_configured": true, 00:18:26.005 
"data_offset": 2048, 00:18:26.005 "data_size": 63488 00:18:26.005 } 00:18:26.005 ] 00:18:26.005 } 00:18:26.005 } 00:18:26.005 }' 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:26.005 BaseBdev2 00:18:26.005 BaseBdev3' 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:26.005 05:28:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.005 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:26.264 [2024-11-20 05:28:57.903806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:26.264 [2024-11-20 05:28:57.903839] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:26.264 [2024-11-20 05:28:57.903911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.264 [2024-11-20 05:28:57.904150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.264 [2024-11-20 05:28:57.904165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66440 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66440 ']' 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 66440 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66440 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:26.264 killing process with pid 66440 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66440' 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@971 -- # kill 66440 00:18:26.264 [2024-11-20 05:28:57.938003] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:26.264 05:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66440 00:18:26.264 [2024-11-20 05:28:58.093481] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:27.200 05:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:27.200 00:18:27.200 real 0m7.552s 00:18:27.200 user 0m12.175s 00:18:27.200 sys 0m1.207s 00:18:27.200 05:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:27.200 ************************************ 00:18:27.200 END TEST raid_state_function_test_sb 00:18:27.200 ************************************ 00:18:27.200 05:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.200 05:28:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:18:27.200 05:28:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:27.200 05:28:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:27.200 05:28:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.200 ************************************ 00:18:27.200 START TEST raid_superblock_test 00:18:27.200 ************************************ 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:27.200 
05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67032 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67032 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 67032 ']' 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:27.200 05:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.200 [2024-11-20 05:28:58.807774] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:18:27.200 [2024-11-20 05:28:58.807899] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67032 ] 00:18:27.200 [2024-11-20 05:28:58.964417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.461 [2024-11-20 05:28:59.077907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.461 [2024-11-20 05:28:59.226323] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.462 [2024-11-20 05:28:59.226373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.042 malloc1 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.042 [2024-11-20 05:28:59.709064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:28.042 [2024-11-20 05:28:59.709129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.042 [2024-11-20 05:28:59.709152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:28.042 [2024-11-20 05:28:59.709162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.042 [2024-11-20 05:28:59.711449] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.042 [2024-11-20 05:28:59.711483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:28.042 pt1 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.042 malloc2 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.042 05:28:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.042 [2024-11-20 05:28:59.751126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:28.042 [2024-11-20 05:28:59.751183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.042 [2024-11-20 05:28:59.751207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:28.042 [2024-11-20 05:28:59.751215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.042 [2024-11-20 05:28:59.753516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.042 [2024-11-20 05:28:59.753549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:28.042 pt2 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.042 malloc3 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.042 [2024-11-20 05:28:59.810103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:28.042 [2024-11-20 05:28:59.810164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.042 [2024-11-20 05:28:59.810188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:28.042 [2024-11-20 05:28:59.810198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.042 [2024-11-20 05:28:59.812525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.042 [2024-11-20 05:28:59.812560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:28.042 pt3 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.042 05:28:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.042 [2024-11-20 05:28:59.818149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:28.042 [2024-11-20 05:28:59.820169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:28.042 [2024-11-20 05:28:59.820241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:28.042 [2024-11-20 05:28:59.820418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:28.042 [2024-11-20 05:28:59.820436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:28.042 [2024-11-20 05:28:59.820694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:28.042 [2024-11-20 05:28:59.820874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:28.042 [2024-11-20 05:28:59.820886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:28.042 [2024-11-20 05:28:59.821032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.042 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.043 "name": "raid_bdev1", 00:18:28.043 "uuid": "1d897743-1b9f-431b-bf04-0c25b2ab271c", 00:18:28.043 "strip_size_kb": 0, 00:18:28.043 "state": "online", 00:18:28.043 "raid_level": "raid1", 00:18:28.043 "superblock": true, 00:18:28.043 "num_base_bdevs": 3, 00:18:28.043 "num_base_bdevs_discovered": 3, 00:18:28.043 "num_base_bdevs_operational": 3, 00:18:28.043 "base_bdevs_list": [ 00:18:28.043 { 00:18:28.043 "name": "pt1", 00:18:28.043 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:28.043 "is_configured": true, 00:18:28.043 "data_offset": 2048, 00:18:28.043 "data_size": 63488 00:18:28.043 }, 00:18:28.043 { 00:18:28.043 "name": "pt2", 00:18:28.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.043 "is_configured": true, 00:18:28.043 "data_offset": 2048, 00:18:28.043 "data_size": 63488 00:18:28.043 }, 00:18:28.043 { 00:18:28.043 "name": "pt3", 00:18:28.043 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:18:28.043 "is_configured": true, 00:18:28.043 "data_offset": 2048, 00:18:28.043 "data_size": 63488 00:18:28.043 } 00:18:28.043 ] 00:18:28.043 }' 00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.043 05:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.303 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:28.303 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:28.303 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:28.303 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:28.303 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:28.303 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:28.303 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:28.303 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:28.303 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.303 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.303 [2024-11-20 05:29:00.134555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.564 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.564 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:28.564 "name": "raid_bdev1", 00:18:28.564 "aliases": [ 00:18:28.564 "1d897743-1b9f-431b-bf04-0c25b2ab271c" 00:18:28.564 ], 00:18:28.564 "product_name": "Raid Volume", 00:18:28.564 "block_size": 512, 00:18:28.564 
"num_blocks": 63488, 00:18:28.564 "uuid": "1d897743-1b9f-431b-bf04-0c25b2ab271c", 00:18:28.564 "assigned_rate_limits": { 00:18:28.564 "rw_ios_per_sec": 0, 00:18:28.564 "rw_mbytes_per_sec": 0, 00:18:28.564 "r_mbytes_per_sec": 0, 00:18:28.564 "w_mbytes_per_sec": 0 00:18:28.564 }, 00:18:28.564 "claimed": false, 00:18:28.564 "zoned": false, 00:18:28.564 "supported_io_types": { 00:18:28.564 "read": true, 00:18:28.564 "write": true, 00:18:28.564 "unmap": false, 00:18:28.564 "flush": false, 00:18:28.564 "reset": true, 00:18:28.564 "nvme_admin": false, 00:18:28.564 "nvme_io": false, 00:18:28.564 "nvme_io_md": false, 00:18:28.564 "write_zeroes": true, 00:18:28.564 "zcopy": false, 00:18:28.564 "get_zone_info": false, 00:18:28.564 "zone_management": false, 00:18:28.564 "zone_append": false, 00:18:28.564 "compare": false, 00:18:28.564 "compare_and_write": false, 00:18:28.564 "abort": false, 00:18:28.564 "seek_hole": false, 00:18:28.564 "seek_data": false, 00:18:28.564 "copy": false, 00:18:28.564 "nvme_iov_md": false 00:18:28.564 }, 00:18:28.564 "memory_domains": [ 00:18:28.564 { 00:18:28.564 "dma_device_id": "system", 00:18:28.564 "dma_device_type": 1 00:18:28.564 }, 00:18:28.564 { 00:18:28.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.564 "dma_device_type": 2 00:18:28.564 }, 00:18:28.564 { 00:18:28.564 "dma_device_id": "system", 00:18:28.564 "dma_device_type": 1 00:18:28.564 }, 00:18:28.564 { 00:18:28.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.565 "dma_device_type": 2 00:18:28.565 }, 00:18:28.565 { 00:18:28.565 "dma_device_id": "system", 00:18:28.565 "dma_device_type": 1 00:18:28.565 }, 00:18:28.565 { 00:18:28.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.565 "dma_device_type": 2 00:18:28.565 } 00:18:28.565 ], 00:18:28.565 "driver_specific": { 00:18:28.565 "raid": { 00:18:28.565 "uuid": "1d897743-1b9f-431b-bf04-0c25b2ab271c", 00:18:28.565 "strip_size_kb": 0, 00:18:28.565 "state": "online", 00:18:28.565 "raid_level": "raid1", 00:18:28.565 
"superblock": true, 00:18:28.565 "num_base_bdevs": 3, 00:18:28.565 "num_base_bdevs_discovered": 3, 00:18:28.565 "num_base_bdevs_operational": 3, 00:18:28.565 "base_bdevs_list": [ 00:18:28.565 { 00:18:28.565 "name": "pt1", 00:18:28.565 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:28.565 "is_configured": true, 00:18:28.565 "data_offset": 2048, 00:18:28.565 "data_size": 63488 00:18:28.565 }, 00:18:28.565 { 00:18:28.565 "name": "pt2", 00:18:28.565 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.565 "is_configured": true, 00:18:28.565 "data_offset": 2048, 00:18:28.565 "data_size": 63488 00:18:28.565 }, 00:18:28.565 { 00:18:28.565 "name": "pt3", 00:18:28.565 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:28.565 "is_configured": true, 00:18:28.565 "data_offset": 2048, 00:18:28.565 "data_size": 63488 00:18:28.565 } 00:18:28.565 ] 00:18:28.565 } 00:18:28.565 } 00:18:28.565 }' 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:28.565 pt2 00:18:28.565 pt3' 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.565 05:29:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:28.565 [2024-11-20 05:29:00.346546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1d897743-1b9f-431b-bf04-0c25b2ab271c 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1d897743-1b9f-431b-bf04-0c25b2ab271c ']' 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.565 [2024-11-20 05:29:00.378210] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:28.565 [2024-11-20 05:29:00.378240] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.565 [2024-11-20 05:29:00.378318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.565 [2024-11-20 05:29:00.378415] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.565 [2024-11-20 05:29:00.378427] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:28.565 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 
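The cleanup check just above relies on a jq predicate over `bdev_get_bdevs` output: `[.[] | select(.product_name == "passthru")] | any` evaluates to `false` once every passthru bdev has been deleted. A minimal stand-alone sketch of that predicate follows; the sample JSON is invented for illustration (the real test feeds live `rpc_cmd` output):

```shell
# Invented sample standing in for live `rpc_cmd bdev_get_bdevs` output:
# only malloc bdevs remain after the pt1/pt2/pt3 passthru bdevs were deleted.
bdevs='[{"name":"malloc1","product_name":"Malloc disk"},
        {"name":"malloc2","product_name":"Malloc disk"}]'

# Same predicate the test uses at bdev_raid.sh@451: collect every bdev whose
# product_name is "passthru" and ask whether the resulting array is non-empty.
remaining=$(echo "$bdevs" | jq -r '[.[] | select(.product_name == "passthru")] | any')

echo "$remaining"   # false -> cleanup succeeded; the '[' false == true ']' branch is skipped
```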
00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.828 [2024-11-20 05:29:00.482281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:28.828 [2024-11-20 05:29:00.484311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:28.828 [2024-11-20 05:29:00.484378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:28.828 [2024-11-20 05:29:00.484432] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:28.828 [2024-11-20 05:29:00.484539] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:28.828 [2024-11-20 05:29:00.484565] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:28.828 [2024-11-20 05:29:00.484582] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:28.828 [2024-11-20 05:29:00.484593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:28.828 request: 00:18:28.828 { 00:18:28.828 "name": "raid_bdev1", 00:18:28.828 "raid_level": "raid1", 00:18:28.828 "base_bdevs": [ 00:18:28.828 "malloc1", 00:18:28.828 "malloc2", 00:18:28.828 "malloc3" 00:18:28.828 ], 00:18:28.828 "superblock": false, 00:18:28.828 "method": "bdev_raid_create", 00:18:28.828 "req_id": 1 00:18:28.828 } 00:18:28.828 
Got JSON-RPC error response 00:18:28.828 response: 00:18:28.828 { 00:18:28.828 "code": -17, 00:18:28.828 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:28.828 } 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:28.828 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.829 [2024-11-20 05:29:00.522270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:28.829 [2024-11-20 05:29:00.522333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:18:28.829 [2024-11-20 05:29:00.522357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:28.829 [2024-11-20 05:29:00.522380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.829 [2024-11-20 05:29:00.524800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.829 [2024-11-20 05:29:00.524836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:28.829 [2024-11-20 05:29:00.524925] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:28.829 [2024-11-20 05:29:00.525009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:28.829 pt1 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.829 
05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.829 "name": "raid_bdev1", 00:18:28.829 "uuid": "1d897743-1b9f-431b-bf04-0c25b2ab271c", 00:18:28.829 "strip_size_kb": 0, 00:18:28.829 "state": "configuring", 00:18:28.829 "raid_level": "raid1", 00:18:28.829 "superblock": true, 00:18:28.829 "num_base_bdevs": 3, 00:18:28.829 "num_base_bdevs_discovered": 1, 00:18:28.829 "num_base_bdevs_operational": 3, 00:18:28.829 "base_bdevs_list": [ 00:18:28.829 { 00:18:28.829 "name": "pt1", 00:18:28.829 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:28.829 "is_configured": true, 00:18:28.829 "data_offset": 2048, 00:18:28.829 "data_size": 63488 00:18:28.829 }, 00:18:28.829 { 00:18:28.829 "name": null, 00:18:28.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.829 "is_configured": false, 00:18:28.829 "data_offset": 2048, 00:18:28.829 "data_size": 63488 00:18:28.829 }, 00:18:28.829 { 00:18:28.829 "name": null, 00:18:28.829 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:28.829 "is_configured": false, 00:18:28.829 "data_offset": 2048, 00:18:28.829 "data_size": 63488 00:18:28.829 } 00:18:28.829 ] 00:18:28.829 }' 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.829 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.119 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 
-- # '[' 3 -gt 2 ']' 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.120 [2024-11-20 05:29:00.838396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:29.120 [2024-11-20 05:29:00.838473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.120 [2024-11-20 05:29:00.838498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:29.120 [2024-11-20 05:29:00.838509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.120 [2024-11-20 05:29:00.838991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.120 [2024-11-20 05:29:00.839014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:29.120 [2024-11-20 05:29:00.839104] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:29.120 [2024-11-20 05:29:00.839134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:29.120 pt2 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.120 [2024-11-20 05:29:00.846379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.120 "name": "raid_bdev1", 00:18:29.120 "uuid": "1d897743-1b9f-431b-bf04-0c25b2ab271c", 00:18:29.120 "strip_size_kb": 0, 00:18:29.120 "state": "configuring", 00:18:29.120 "raid_level": "raid1", 00:18:29.120 "superblock": 
true, 00:18:29.120 "num_base_bdevs": 3, 00:18:29.120 "num_base_bdevs_discovered": 1, 00:18:29.120 "num_base_bdevs_operational": 3, 00:18:29.120 "base_bdevs_list": [ 00:18:29.120 { 00:18:29.120 "name": "pt1", 00:18:29.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:29.120 "is_configured": true, 00:18:29.120 "data_offset": 2048, 00:18:29.120 "data_size": 63488 00:18:29.120 }, 00:18:29.120 { 00:18:29.120 "name": null, 00:18:29.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.120 "is_configured": false, 00:18:29.120 "data_offset": 0, 00:18:29.120 "data_size": 63488 00:18:29.120 }, 00:18:29.120 { 00:18:29.120 "name": null, 00:18:29.120 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:29.120 "is_configured": false, 00:18:29.120 "data_offset": 2048, 00:18:29.120 "data_size": 63488 00:18:29.120 } 00:18:29.120 ] 00:18:29.120 }' 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.120 05:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.381 [2024-11-20 05:29:01.186459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:29.381 [2024-11-20 05:29:01.186535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.381 [2024-11-20 05:29:01.186555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 
00:18:29.381 [2024-11-20 05:29:01.186566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.381 [2024-11-20 05:29:01.187057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.381 [2024-11-20 05:29:01.187087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:29.381 [2024-11-20 05:29:01.187174] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:29.381 [2024-11-20 05:29:01.187211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:29.381 pt2 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.381 [2024-11-20 05:29:01.194416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:29.381 [2024-11-20 05:29:01.194463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.381 [2024-11-20 05:29:01.194481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:29.381 [2024-11-20 05:29:01.194492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.381 [2024-11-20 05:29:01.194877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.381 [2024-11-20 05:29:01.194907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt3 00:18:29.381 [2024-11-20 05:29:01.194966] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:29.381 [2024-11-20 05:29:01.194986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:29.381 [2024-11-20 05:29:01.195106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:29.381 [2024-11-20 05:29:01.195119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:29.381 [2024-11-20 05:29:01.195356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:29.381 [2024-11-20 05:29:01.195526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:29.381 [2024-11-20 05:29:01.195546] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:29.381 [2024-11-20 05:29:01.195682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.381 pt3 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.381 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.382 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.382 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.382 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.382 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.382 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.382 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.642 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.642 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.642 "name": "raid_bdev1", 00:18:29.642 "uuid": "1d897743-1b9f-431b-bf04-0c25b2ab271c", 00:18:29.642 "strip_size_kb": 0, 00:18:29.642 "state": "online", 00:18:29.642 "raid_level": "raid1", 00:18:29.642 "superblock": true, 00:18:29.642 "num_base_bdevs": 3, 00:18:29.642 "num_base_bdevs_discovered": 3, 00:18:29.642 "num_base_bdevs_operational": 3, 00:18:29.642 "base_bdevs_list": [ 00:18:29.642 { 00:18:29.642 "name": "pt1", 00:18:29.642 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:29.642 "is_configured": true, 00:18:29.642 "data_offset": 2048, 00:18:29.642 "data_size": 63488 00:18:29.642 }, 00:18:29.642 { 00:18:29.642 "name": "pt2", 00:18:29.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.642 "is_configured": true, 00:18:29.642 "data_offset": 2048, 00:18:29.642 "data_size": 63488 00:18:29.642 }, 00:18:29.642 { 00:18:29.642 "name": 
"pt3", 00:18:29.642 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:29.642 "is_configured": true, 00:18:29.642 "data_offset": 2048, 00:18:29.642 "data_size": 63488 00:18:29.642 } 00:18:29.642 ] 00:18:29.642 }' 00:18:29.642 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.642 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.904 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:29.904 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:29.904 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:29.904 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:29.904 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:29.904 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:29.904 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:29.904 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.905 [2024-11-20 05:29:01.514895] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:29.905 "name": "raid_bdev1", 00:18:29.905 "aliases": [ 00:18:29.905 "1d897743-1b9f-431b-bf04-0c25b2ab271c" 00:18:29.905 ], 00:18:29.905 "product_name": "Raid Volume", 00:18:29.905 
"block_size": 512, 00:18:29.905 "num_blocks": 63488, 00:18:29.905 "uuid": "1d897743-1b9f-431b-bf04-0c25b2ab271c", 00:18:29.905 "assigned_rate_limits": { 00:18:29.905 "rw_ios_per_sec": 0, 00:18:29.905 "rw_mbytes_per_sec": 0, 00:18:29.905 "r_mbytes_per_sec": 0, 00:18:29.905 "w_mbytes_per_sec": 0 00:18:29.905 }, 00:18:29.905 "claimed": false, 00:18:29.905 "zoned": false, 00:18:29.905 "supported_io_types": { 00:18:29.905 "read": true, 00:18:29.905 "write": true, 00:18:29.905 "unmap": false, 00:18:29.905 "flush": false, 00:18:29.905 "reset": true, 00:18:29.905 "nvme_admin": false, 00:18:29.905 "nvme_io": false, 00:18:29.905 "nvme_io_md": false, 00:18:29.905 "write_zeroes": true, 00:18:29.905 "zcopy": false, 00:18:29.905 "get_zone_info": false, 00:18:29.905 "zone_management": false, 00:18:29.905 "zone_append": false, 00:18:29.905 "compare": false, 00:18:29.905 "compare_and_write": false, 00:18:29.905 "abort": false, 00:18:29.905 "seek_hole": false, 00:18:29.905 "seek_data": false, 00:18:29.905 "copy": false, 00:18:29.905 "nvme_iov_md": false 00:18:29.905 }, 00:18:29.905 "memory_domains": [ 00:18:29.905 { 00:18:29.905 "dma_device_id": "system", 00:18:29.905 "dma_device_type": 1 00:18:29.905 }, 00:18:29.905 { 00:18:29.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.905 "dma_device_type": 2 00:18:29.905 }, 00:18:29.905 { 00:18:29.905 "dma_device_id": "system", 00:18:29.905 "dma_device_type": 1 00:18:29.905 }, 00:18:29.905 { 00:18:29.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.905 "dma_device_type": 2 00:18:29.905 }, 00:18:29.905 { 00:18:29.905 "dma_device_id": "system", 00:18:29.905 "dma_device_type": 1 00:18:29.905 }, 00:18:29.905 { 00:18:29.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.905 "dma_device_type": 2 00:18:29.905 } 00:18:29.905 ], 00:18:29.905 "driver_specific": { 00:18:29.905 "raid": { 00:18:29.905 "uuid": "1d897743-1b9f-431b-bf04-0c25b2ab271c", 00:18:29.905 "strip_size_kb": 0, 00:18:29.905 "state": "online", 00:18:29.905 
"raid_level": "raid1", 00:18:29.905 "superblock": true, 00:18:29.905 "num_base_bdevs": 3, 00:18:29.905 "num_base_bdevs_discovered": 3, 00:18:29.905 "num_base_bdevs_operational": 3, 00:18:29.905 "base_bdevs_list": [ 00:18:29.905 { 00:18:29.905 "name": "pt1", 00:18:29.905 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:29.905 "is_configured": true, 00:18:29.905 "data_offset": 2048, 00:18:29.905 "data_size": 63488 00:18:29.905 }, 00:18:29.905 { 00:18:29.905 "name": "pt2", 00:18:29.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.905 "is_configured": true, 00:18:29.905 "data_offset": 2048, 00:18:29.905 "data_size": 63488 00:18:29.905 }, 00:18:29.905 { 00:18:29.905 "name": "pt3", 00:18:29.905 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:29.905 "is_configured": true, 00:18:29.905 "data_offset": 2048, 00:18:29.905 "data_size": 63488 00:18:29.905 } 00:18:29.905 ] 00:18:29.905 } 00:18:29.905 } 00:18:29.905 }' 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:29.905 pt2 00:18:29.905 pt3' 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.905 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:29.905 [2024-11-20 05:29:01.722879] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1d897743-1b9f-431b-bf04-0c25b2ab271c '!=' 1d897743-1b9f-431b-bf04-0c25b2ab271c ']' 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.167 [2024-11-20 05:29:01.754629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.167 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.167 "name": "raid_bdev1", 00:18:30.167 "uuid": "1d897743-1b9f-431b-bf04-0c25b2ab271c", 00:18:30.167 "strip_size_kb": 0, 00:18:30.167 "state": "online", 00:18:30.167 "raid_level": "raid1", 00:18:30.167 "superblock": true, 00:18:30.167 "num_base_bdevs": 3, 00:18:30.167 "num_base_bdevs_discovered": 2, 00:18:30.167 "num_base_bdevs_operational": 2, 00:18:30.167 
"base_bdevs_list": [ 00:18:30.167 { 00:18:30.167 "name": null, 00:18:30.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.167 "is_configured": false, 00:18:30.167 "data_offset": 0, 00:18:30.167 "data_size": 63488 00:18:30.167 }, 00:18:30.167 { 00:18:30.167 "name": "pt2", 00:18:30.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.167 "is_configured": true, 00:18:30.168 "data_offset": 2048, 00:18:30.168 "data_size": 63488 00:18:30.168 }, 00:18:30.168 { 00:18:30.168 "name": "pt3", 00:18:30.168 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:30.168 "is_configured": true, 00:18:30.168 "data_offset": 2048, 00:18:30.168 "data_size": 63488 00:18:30.168 } 00:18:30.168 ] 00:18:30.168 }' 00:18:30.168 05:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.168 05:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.427 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:30.427 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.427 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.427 [2024-11-20 05:29:02.050664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.427 [2024-11-20 05:29:02.050702] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.427 [2024-11-20 05:29:02.050789] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.428 [2024-11-20 05:29:02.050859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.428 [2024-11-20 05:29:02.050873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.428 05:29:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.428 [2024-11-20 05:29:02.110624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:30.428 [2024-11-20 05:29:02.110733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.428 [2024-11-20 05:29:02.110753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:30.428 [2024-11-20 05:29:02.110766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.428 [2024-11-20 05:29:02.113236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.428 [2024-11-20 05:29:02.113279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:30.428 [2024-11-20 05:29:02.113385] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:30.428 [2024-11-20 05:29:02.113438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:30.428 pt2 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 
00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.428 "name": "raid_bdev1", 00:18:30.428 "uuid": "1d897743-1b9f-431b-bf04-0c25b2ab271c", 00:18:30.428 "strip_size_kb": 0, 00:18:30.428 "state": "configuring", 00:18:30.428 "raid_level": "raid1", 00:18:30.428 "superblock": true, 00:18:30.428 "num_base_bdevs": 3, 00:18:30.428 "num_base_bdevs_discovered": 1, 00:18:30.428 "num_base_bdevs_operational": 2, 00:18:30.428 
"base_bdevs_list": [ 00:18:30.428 { 00:18:30.428 "name": null, 00:18:30.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.428 "is_configured": false, 00:18:30.428 "data_offset": 2048, 00:18:30.428 "data_size": 63488 00:18:30.428 }, 00:18:30.428 { 00:18:30.428 "name": "pt2", 00:18:30.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.428 "is_configured": true, 00:18:30.428 "data_offset": 2048, 00:18:30.428 "data_size": 63488 00:18:30.428 }, 00:18:30.428 { 00:18:30.428 "name": null, 00:18:30.428 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:30.428 "is_configured": false, 00:18:30.428 "data_offset": 2048, 00:18:30.428 "data_size": 63488 00:18:30.428 } 00:18:30.428 ] 00:18:30.428 }' 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.428 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.689 [2024-11-20 05:29:02.442743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:30.689 [2024-11-20 05:29:02.442825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.689 [2024-11-20 05:29:02.442847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:30.689 [2024-11-20 05:29:02.442859] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.689 [2024-11-20 05:29:02.443346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.689 [2024-11-20 05:29:02.443383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:30.689 [2024-11-20 05:29:02.443476] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:30.689 [2024-11-20 05:29:02.443514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:30.689 [2024-11-20 05:29:02.443635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:30.689 [2024-11-20 05:29:02.443647] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:30.689 [2024-11-20 05:29:02.443943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:30.689 [2024-11-20 05:29:02.444098] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:30.689 [2024-11-20 05:29:02.444107] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:30.689 [2024-11-20 05:29:02.444248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.689 pt3 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.689 "name": "raid_bdev1", 00:18:30.689 "uuid": "1d897743-1b9f-431b-bf04-0c25b2ab271c", 00:18:30.689 "strip_size_kb": 0, 00:18:30.689 "state": "online", 00:18:30.689 "raid_level": "raid1", 00:18:30.689 "superblock": true, 00:18:30.689 "num_base_bdevs": 3, 00:18:30.689 "num_base_bdevs_discovered": 2, 00:18:30.689 "num_base_bdevs_operational": 2, 00:18:30.689 "base_bdevs_list": [ 00:18:30.689 { 00:18:30.689 "name": null, 00:18:30.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.689 "is_configured": false, 00:18:30.689 "data_offset": 2048, 00:18:30.689 "data_size": 63488 00:18:30.689 }, 00:18:30.689 { 00:18:30.689 "name": "pt2", 00:18:30.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.689 "is_configured": true, 00:18:30.689 "data_offset": 2048, 
00:18:30.689 "data_size": 63488 00:18:30.689 }, 00:18:30.689 { 00:18:30.689 "name": "pt3", 00:18:30.689 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:30.689 "is_configured": true, 00:18:30.689 "data_offset": 2048, 00:18:30.689 "data_size": 63488 00:18:30.689 } 00:18:30.689 ] 00:18:30.689 }' 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.689 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.258 [2024-11-20 05:29:02.794798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.258 [2024-11-20 05:29:02.794835] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.258 [2024-11-20 05:29:02.794918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.258 [2024-11-20 05:29:02.794989] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.258 [2024-11-20 05:29:02.794999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.258 [2024-11-20 05:29:02.850808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:31.258 [2024-11-20 05:29:02.850952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.258 [2024-11-20 05:29:02.850996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:31.258 [2024-11-20 05:29:02.851049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.258 [2024-11-20 05:29:02.853494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.258 [2024-11-20 05:29:02.853602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:18:31.258 [2024-11-20 05:29:02.853735] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:31.258 [2024-11-20 05:29:02.853841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:31.258 [2024-11-20 05:29:02.853986] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:31.258 [2024-11-20 05:29:02.853997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.258 [2024-11-20 05:29:02.854015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:31.258 [2024-11-20 05:29:02.854067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:31.258 pt1 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.258 05:29:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.258 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.259 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.259 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.259 05:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.259 "name": "raid_bdev1", 00:18:31.259 "uuid": "1d897743-1b9f-431b-bf04-0c25b2ab271c", 00:18:31.259 "strip_size_kb": 0, 00:18:31.259 "state": "configuring", 00:18:31.259 "raid_level": "raid1", 00:18:31.259 "superblock": true, 00:18:31.259 "num_base_bdevs": 3, 00:18:31.259 "num_base_bdevs_discovered": 1, 00:18:31.259 "num_base_bdevs_operational": 2, 00:18:31.259 "base_bdevs_list": [ 00:18:31.259 { 00:18:31.259 "name": null, 00:18:31.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.259 "is_configured": false, 00:18:31.259 "data_offset": 2048, 00:18:31.259 "data_size": 63488 00:18:31.259 }, 00:18:31.259 { 00:18:31.259 "name": "pt2", 00:18:31.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.259 "is_configured": true, 00:18:31.259 "data_offset": 2048, 00:18:31.259 "data_size": 63488 00:18:31.259 }, 00:18:31.259 { 00:18:31.259 "name": null, 00:18:31.259 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:31.259 "is_configured": false, 00:18:31.259 "data_offset": 2048, 00:18:31.259 "data_size": 63488 00:18:31.259 } 00:18:31.259 ] 00:18:31.259 }' 00:18:31.259 05:29:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.259 05:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.518 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:31.518 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:31.518 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.519 [2024-11-20 05:29:03.198930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:31.519 [2024-11-20 05:29:03.199002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.519 [2024-11-20 05:29:03.199027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:31.519 [2024-11-20 05:29:03.199036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.519 [2024-11-20 05:29:03.199556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.519 [2024-11-20 05:29:03.199573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:31.519 [2024-11-20 05:29:03.199662] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev pt3 00:18:31.519 [2024-11-20 05:29:03.199705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:31.519 [2024-11-20 05:29:03.199842] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:31.519 [2024-11-20 05:29:03.199851] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:31.519 [2024-11-20 05:29:03.200111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:31.519 [2024-11-20 05:29:03.200261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:31.519 [2024-11-20 05:29:03.200273] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:31.519 [2024-11-20 05:29:03.200421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.519 pt3 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.519 "name": "raid_bdev1", 00:18:31.519 "uuid": "1d897743-1b9f-431b-bf04-0c25b2ab271c", 00:18:31.519 "strip_size_kb": 0, 00:18:31.519 "state": "online", 00:18:31.519 "raid_level": "raid1", 00:18:31.519 "superblock": true, 00:18:31.519 "num_base_bdevs": 3, 00:18:31.519 "num_base_bdevs_discovered": 2, 00:18:31.519 "num_base_bdevs_operational": 2, 00:18:31.519 "base_bdevs_list": [ 00:18:31.519 { 00:18:31.519 "name": null, 00:18:31.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.519 "is_configured": false, 00:18:31.519 "data_offset": 2048, 00:18:31.519 "data_size": 63488 00:18:31.519 }, 00:18:31.519 { 00:18:31.519 "name": "pt2", 00:18:31.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.519 "is_configured": true, 00:18:31.519 "data_offset": 2048, 00:18:31.519 "data_size": 63488 00:18:31.519 }, 00:18:31.519 { 00:18:31.519 "name": "pt3", 00:18:31.519 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:31.519 "is_configured": true, 00:18:31.519 "data_offset": 2048, 00:18:31.519 "data_size": 63488 00:18:31.519 } 00:18:31.519 ] 00:18:31.519 }' 00:18:31.519 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.519 
05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.780 [2024-11-20 05:29:03.559321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1d897743-1b9f-431b-bf04-0c25b2ab271c '!=' 1d897743-1b9f-431b-bf04-0c25b2ab271c ']' 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67032 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 67032 ']' 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 67032 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:18:31.780 05:29:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:31.780 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67032 00:18:32.041 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:32.041 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:32.041 killing process with pid 67032 00:18:32.041 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67032' 00:18:32.041 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 67032 00:18:32.041 [2024-11-20 05:29:03.613682] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:32.041 05:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 67032 00:18:32.041 [2024-11-20 05:29:03.613792] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:32.041 [2024-11-20 05:29:03.613866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:32.041 [2024-11-20 05:29:03.613883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:32.041 [2024-11-20 05:29:03.816472] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:32.975 05:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:32.975 00:18:32.975 real 0m5.744s 00:18:32.975 user 0m8.937s 00:18:32.975 sys 0m0.969s 00:18:32.975 05:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:32.976 ************************************ 00:18:32.976 END TEST raid_superblock_test 00:18:32.976 ************************************ 00:18:32.976 05:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.976 
05:29:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:18:32.976 05:29:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:32.976 05:29:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:32.976 05:29:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:32.976 ************************************ 00:18:32.976 START TEST raid_read_error_test 00:18:32.976 ************************************ 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IeRCfZZxp3 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67456 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67456 00:18:32.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 67456 ']' 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.976 05:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:32.976 [2024-11-20 05:29:04.609925] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:18:32.976 [2024-11-20 05:29:04.610060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67456 ] 00:18:32.976 [2024-11-20 05:29:04.767320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.233 [2024-11-20 05:29:04.872719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.233 [2024-11-20 05:29:04.996900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.233 [2024-11-20 05:29:04.996943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.797 BaseBdev1_malloc 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.797 true 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.797 [2024-11-20 05:29:05.602532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:33.797 [2024-11-20 05:29:05.602595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.797 [2024-11-20 05:29:05.602616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:33.797 [2024-11-20 05:29:05.602627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.797 [2024-11-20 05:29:05.604643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.797 [2024-11-20 05:29:05.604678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:33.797 BaseBdev1 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.797 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.059 BaseBdev2_malloc 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.059 true 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.059 [2024-11-20 05:29:05.645029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:34.059 [2024-11-20 05:29:05.645087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.059 [2024-11-20 05:29:05.645104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:34.059 [2024-11-20 05:29:05.645113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.059 [2024-11-20 05:29:05.647440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.059 [2024-11-20 05:29:05.647489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:34.059 BaseBdev2 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.059 BaseBdev3_malloc 00:18:34.059 05:29:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.059 true 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.059 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.059 [2024-11-20 05:29:05.708175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:34.060 [2024-11-20 05:29:05.708234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.060 [2024-11-20 05:29:05.708254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:34.060 [2024-11-20 05:29:05.708265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.060 [2024-11-20 05:29:05.710586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.060 [2024-11-20 05:29:05.710622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:34.060 BaseBdev3 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.060 [2024-11-20 05:29:05.716245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.060 [2024-11-20 05:29:05.718236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.060 [2024-11-20 05:29:05.718318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:34.060 [2024-11-20 05:29:05.718537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:34.060 [2024-11-20 05:29:05.718548] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:34.060 [2024-11-20 05:29:05.718809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:18:34.060 [2024-11-20 05:29:05.718978] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:34.060 [2024-11-20 05:29:05.718996] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:34.060 [2024-11-20 05:29:05.719135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.060 05:29:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.060 "name": "raid_bdev1", 00:18:34.060 "uuid": "1d2df603-7fa8-4b8a-b2a4-a94ff96c61f4", 00:18:34.060 "strip_size_kb": 0, 00:18:34.060 "state": "online", 00:18:34.060 "raid_level": "raid1", 00:18:34.060 "superblock": true, 00:18:34.060 "num_base_bdevs": 3, 00:18:34.060 "num_base_bdevs_discovered": 3, 00:18:34.060 "num_base_bdevs_operational": 3, 00:18:34.060 "base_bdevs_list": [ 00:18:34.060 { 00:18:34.060 "name": "BaseBdev1", 00:18:34.060 "uuid": "c2e44f96-5523-5beb-a3d5-9a6434d54dd9", 00:18:34.060 "is_configured": true, 00:18:34.060 "data_offset": 2048, 00:18:34.060 "data_size": 63488 00:18:34.060 }, 00:18:34.060 { 00:18:34.060 "name": "BaseBdev2", 00:18:34.060 "uuid": "c6c9958e-883f-5d8a-9484-729defef4692", 00:18:34.060 "is_configured": true, 00:18:34.060 "data_offset": 2048, 00:18:34.060 "data_size": 63488 
00:18:34.060 }, 00:18:34.060 { 00:18:34.060 "name": "BaseBdev3", 00:18:34.060 "uuid": "350a7be4-b497-5456-953a-3f7dccfb248e", 00:18:34.060 "is_configured": true, 00:18:34.060 "data_offset": 2048, 00:18:34.060 "data_size": 63488 00:18:34.060 } 00:18:34.060 ] 00:18:34.060 }' 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.060 05:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.321 05:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:34.321 05:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:34.583 [2024-11-20 05:29:06.165544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.528 
05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.528 "name": "raid_bdev1", 00:18:35.528 "uuid": "1d2df603-7fa8-4b8a-b2a4-a94ff96c61f4", 00:18:35.528 "strip_size_kb": 0, 00:18:35.528 "state": "online", 00:18:35.528 "raid_level": "raid1", 00:18:35.528 "superblock": true, 00:18:35.528 "num_base_bdevs": 3, 00:18:35.528 "num_base_bdevs_discovered": 3, 00:18:35.528 "num_base_bdevs_operational": 3, 00:18:35.528 "base_bdevs_list": [ 00:18:35.528 { 00:18:35.528 "name": "BaseBdev1", 00:18:35.528 "uuid": "c2e44f96-5523-5beb-a3d5-9a6434d54dd9", 
00:18:35.528 "is_configured": true, 00:18:35.528 "data_offset": 2048, 00:18:35.528 "data_size": 63488 00:18:35.528 }, 00:18:35.528 { 00:18:35.528 "name": "BaseBdev2", 00:18:35.528 "uuid": "c6c9958e-883f-5d8a-9484-729defef4692", 00:18:35.528 "is_configured": true, 00:18:35.528 "data_offset": 2048, 00:18:35.528 "data_size": 63488 00:18:35.528 }, 00:18:35.528 { 00:18:35.528 "name": "BaseBdev3", 00:18:35.528 "uuid": "350a7be4-b497-5456-953a-3f7dccfb248e", 00:18:35.528 "is_configured": true, 00:18:35.528 "data_offset": 2048, 00:18:35.528 "data_size": 63488 00:18:35.528 } 00:18:35.528 ] 00:18:35.528 }' 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.528 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.788 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:35.788 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.788 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.788 [2024-11-20 05:29:07.404689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:35.788 [2024-11-20 05:29:07.404737] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:35.788 [2024-11-20 05:29:07.407809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:35.788 [2024-11-20 05:29:07.407863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.788 [2024-11-20 05:29:07.407981] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:35.788 [2024-11-20 05:29:07.407992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:35.788 { 00:18:35.788 "results": [ 00:18:35.788 { 00:18:35.788 "job": "raid_bdev1", 
00:18:35.788 "core_mask": "0x1", 00:18:35.788 "workload": "randrw", 00:18:35.788 "percentage": 50, 00:18:35.788 "status": "finished", 00:18:35.788 "queue_depth": 1, 00:18:35.788 "io_size": 131072, 00:18:35.788 "runtime": 1.236969, 00:18:35.788 "iops": 11083.543726641492, 00:18:35.788 "mibps": 1385.4429658301865, 00:18:35.788 "io_failed": 0, 00:18:35.788 "io_timeout": 0, 00:18:35.788 "avg_latency_us": 87.03772339112383, 00:18:35.788 "min_latency_us": 29.53846153846154, 00:18:35.788 "max_latency_us": 1739.2246153846154 00:18:35.788 } 00:18:35.788 ], 00:18:35.788 "core_count": 1 00:18:35.788 } 00:18:35.788 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.788 05:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67456 00:18:35.788 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 67456 ']' 00:18:35.788 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 67456 00:18:35.788 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:18:35.788 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:35.788 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67456 00:18:35.788 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:35.788 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:35.788 killing process with pid 67456 00:18:35.788 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67456' 00:18:35.788 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 67456 00:18:35.788 05:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 67456 00:18:35.788 [2024-11-20 05:29:07.432606] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:35.788 [2024-11-20 05:29:07.583981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:36.730 05:29:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IeRCfZZxp3 00:18:36.730 05:29:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:36.730 05:29:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:36.730 05:29:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:18:36.730 05:29:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:18:36.730 05:29:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:36.730 05:29:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:36.730 05:29:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:36.730 00:18:36.730 real 0m3.859s 00:18:36.730 user 0m4.608s 00:18:36.730 sys 0m0.453s 00:18:36.730 05:29:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:36.730 ************************************ 00:18:36.730 END TEST raid_read_error_test 00:18:36.730 05:29:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.730 ************************************ 00:18:36.730 05:29:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:18:36.730 05:29:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:36.730 05:29:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:36.730 05:29:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:36.730 ************************************ 00:18:36.730 START TEST raid_write_error_test 00:18:36.730 ************************************ 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:36.730 05:29:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Pg1aRepPES 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67596 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67596 00:18:36.730 05:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67596 ']' 00:18:36.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.731 05:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.731 05:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:36.731 05:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:36.731 05:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:36.731 05:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.731 05:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:36.731 [2024-11-20 05:29:08.520998] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:18:36.731 [2024-11-20 05:29:08.521451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67596 ] 00:18:36.993 [2024-11-20 05:29:08.683090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.993 [2024-11-20 05:29:08.805233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.254 [2024-11-20 05:29:08.953943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.254 [2024-11-20 05:29:08.954001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.825 BaseBdev1_malloc 00:18:37.825 05:29:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.825 true 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.825 [2024-11-20 05:29:09.420775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:37.825 [2024-11-20 05:29:09.420831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.825 [2024-11-20 05:29:09.420853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:37.825 [2024-11-20 05:29:09.420866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.825 [2024-11-20 05:29:09.423117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.825 [2024-11-20 05:29:09.423152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:37.825 BaseBdev1 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.825 BaseBdev2_malloc 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.825 true 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.825 [2024-11-20 05:29:09.470859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:37.825 [2024-11-20 05:29:09.470925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.825 [2024-11-20 05:29:09.470947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:37.825 [2024-11-20 05:29:09.470963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.825 [2024-11-20 05:29:09.473689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.825 [2024-11-20 05:29:09.473728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:37.825 BaseBdev2 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.825 BaseBdev3_malloc 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.825 true 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.825 [2024-11-20 05:29:09.543155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:37.825 [2024-11-20 05:29:09.543215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.825 [2024-11-20 05:29:09.543236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:37.825 [2024-11-20 05:29:09.543247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.825 [2024-11-20 05:29:09.545786] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.825 [2024-11-20 05:29:09.545821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:37.825 BaseBdev3 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.825 [2024-11-20 05:29:09.551288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:37.825 [2024-11-20 05:29:09.554099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:37.825 [2024-11-20 05:29:09.554218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:37.825 [2024-11-20 05:29:09.554533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:37.825 [2024-11-20 05:29:09.554560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:37.825 [2024-11-20 05:29:09.554921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:18:37.825 [2024-11-20 05:29:09.555164] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:37.825 [2024-11-20 05:29:09.555188] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:37.825 [2024-11-20 05:29:09.555454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.825 05:29:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.825 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.826 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.826 "name": "raid_bdev1", 00:18:37.826 "uuid": "a72cbb6f-0ac5-4b30-b7d9-92cf572f1b60", 00:18:37.826 "strip_size_kb": 0, 00:18:37.826 "state": "online", 00:18:37.826 "raid_level": "raid1", 00:18:37.826 "superblock": true, 00:18:37.826 
"num_base_bdevs": 3, 00:18:37.826 "num_base_bdevs_discovered": 3, 00:18:37.826 "num_base_bdevs_operational": 3, 00:18:37.826 "base_bdevs_list": [ 00:18:37.826 { 00:18:37.826 "name": "BaseBdev1", 00:18:37.826 "uuid": "2345cb23-8daa-50a0-b9a6-46c6f3802035", 00:18:37.826 "is_configured": true, 00:18:37.826 "data_offset": 2048, 00:18:37.826 "data_size": 63488 00:18:37.826 }, 00:18:37.826 { 00:18:37.826 "name": "BaseBdev2", 00:18:37.826 "uuid": "bcce0092-4112-571b-a54c-b5ffbdb75fca", 00:18:37.826 "is_configured": true, 00:18:37.826 "data_offset": 2048, 00:18:37.826 "data_size": 63488 00:18:37.826 }, 00:18:37.826 { 00:18:37.826 "name": "BaseBdev3", 00:18:37.826 "uuid": "e5efcb1e-9463-5462-a8f2-f297d29c5220", 00:18:37.826 "is_configured": true, 00:18:37.826 "data_offset": 2048, 00:18:37.826 "data_size": 63488 00:18:37.826 } 00:18:37.826 ] 00:18:37.826 }' 00:18:37.826 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.826 05:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.085 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:38.085 05:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:38.346 [2024-11-20 05:29:09.984568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.287 [2024-11-20 05:29:10.898083] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:18:39.287 [2024-11-20 05:29:10.898153] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:39.287 [2024-11-20 05:29:10.898399] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.287 05:29:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.287 "name": "raid_bdev1", 00:18:39.287 "uuid": "a72cbb6f-0ac5-4b30-b7d9-92cf572f1b60", 00:18:39.287 "strip_size_kb": 0, 00:18:39.287 "state": "online", 00:18:39.287 "raid_level": "raid1", 00:18:39.287 "superblock": true, 00:18:39.287 "num_base_bdevs": 3, 00:18:39.287 "num_base_bdevs_discovered": 2, 00:18:39.287 "num_base_bdevs_operational": 2, 00:18:39.287 "base_bdevs_list": [ 00:18:39.287 { 00:18:39.287 "name": null, 00:18:39.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.287 "is_configured": false, 00:18:39.287 "data_offset": 0, 00:18:39.287 "data_size": 63488 00:18:39.287 }, 00:18:39.287 { 00:18:39.287 "name": "BaseBdev2", 00:18:39.287 "uuid": "bcce0092-4112-571b-a54c-b5ffbdb75fca", 00:18:39.287 "is_configured": true, 00:18:39.287 "data_offset": 2048, 00:18:39.287 "data_size": 63488 00:18:39.287 }, 00:18:39.287 { 00:18:39.287 "name": "BaseBdev3", 00:18:39.287 "uuid": "e5efcb1e-9463-5462-a8f2-f297d29c5220", 00:18:39.287 "is_configured": true, 00:18:39.287 "data_offset": 2048, 00:18:39.287 "data_size": 63488 00:18:39.287 } 00:18:39.287 ] 00:18:39.287 }' 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.287 05:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.548 05:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:39.548 05:29:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.548 05:29:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.548 [2024-11-20 05:29:11.244605] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.548 [2024-11-20 05:29:11.244653] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:39.548 [2024-11-20 05:29:11.247808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.548 [2024-11-20 05:29:11.247872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.548 [2024-11-20 05:29:11.247965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.548 [2024-11-20 05:29:11.247986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:39.548 { 00:18:39.548 "results": [ 00:18:39.548 { 00:18:39.548 "job": "raid_bdev1", 00:18:39.548 "core_mask": "0x1", 00:18:39.548 "workload": "randrw", 00:18:39.548 "percentage": 50, 00:18:39.548 "status": "finished", 00:18:39.548 "queue_depth": 1, 00:18:39.548 "io_size": 131072, 00:18:39.548 "runtime": 1.257946, 00:18:39.548 "iops": 13619.026571887824, 00:18:39.548 "mibps": 1702.378321485978, 00:18:39.548 "io_failed": 0, 00:18:39.548 "io_timeout": 0, 00:18:39.548 "avg_latency_us": 70.23204870777133, 00:18:39.548 "min_latency_us": 29.735384615384614, 00:18:39.548 "max_latency_us": 1726.6215384615384 00:18:39.548 } 00:18:39.548 ], 00:18:39.548 "core_count": 1 00:18:39.548 } 00:18:39.548 05:29:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.548 05:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67596 00:18:39.548 05:29:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67596 ']' 00:18:39.548 05:29:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@956 -- # kill -0 67596 00:18:39.548 05:29:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:18:39.548 05:29:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:39.548 05:29:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67596 00:18:39.548 05:29:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:39.548 killing process with pid 67596 00:18:39.548 05:29:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:39.548 05:29:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67596' 00:18:39.548 05:29:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67596 00:18:39.548 [2024-11-20 05:29:11.274627] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:39.548 05:29:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67596 00:18:39.808 [2024-11-20 05:29:11.427719] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:40.375 05:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Pg1aRepPES 00:18:40.375 05:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:40.375 05:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:40.375 05:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:18:40.375 05:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:18:40.375 05:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:40.375 05:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:40.375 05:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 
-- # [[ 0.00 = \0\.\0\0 ]] 00:18:40.375 00:18:40.375 real 0m3.695s 00:18:40.375 user 0m4.371s 00:18:40.375 sys 0m0.453s 00:18:40.375 05:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:40.375 ************************************ 00:18:40.375 05:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.375 END TEST raid_write_error_test 00:18:40.375 ************************************ 00:18:40.375 05:29:12 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:18:40.375 05:29:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:18:40.375 05:29:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:18:40.375 05:29:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:40.375 05:29:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:40.375 05:29:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:40.375 ************************************ 00:18:40.375 START TEST raid_state_function_test 00:18:40.376 ************************************ 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:40.376 05:29:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:40.376 Process raid pid: 67723 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67723 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67723' 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67723 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 67723 ']' 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.376 05:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:40.637 [2024-11-20 05:29:12.266693] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:18:40.637 [2024-11-20 05:29:12.266828] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.637 [2024-11-20 05:29:12.432703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.899 [2024-11-20 05:29:12.554987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.899 [2024-11-20 05:29:12.704823] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:40.899 [2024-11-20 05:29:12.704879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.469 [2024-11-20 05:29:13.072731] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:41.469 [2024-11-20 05:29:13.072787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:41.469 [2024-11-20 05:29:13.072798] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:41.469 [2024-11-20 05:29:13.072809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:41.469 [2024-11-20 05:29:13.072815] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:18:41.469 [2024-11-20 05:29:13.072825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:41.469 [2024-11-20 05:29:13.072831] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:41.469 [2024-11-20 05:29:13.072840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.469 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.469 "name": "Existed_Raid", 00:18:41.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.469 "strip_size_kb": 64, 00:18:41.469 "state": "configuring", 00:18:41.469 "raid_level": "raid0", 00:18:41.469 "superblock": false, 00:18:41.469 "num_base_bdevs": 4, 00:18:41.469 "num_base_bdevs_discovered": 0, 00:18:41.470 "num_base_bdevs_operational": 4, 00:18:41.470 "base_bdevs_list": [ 00:18:41.470 { 00:18:41.470 "name": "BaseBdev1", 00:18:41.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.470 "is_configured": false, 00:18:41.470 "data_offset": 0, 00:18:41.470 "data_size": 0 00:18:41.470 }, 00:18:41.470 { 00:18:41.470 "name": "BaseBdev2", 00:18:41.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.470 "is_configured": false, 00:18:41.470 "data_offset": 0, 00:18:41.470 "data_size": 0 00:18:41.470 }, 00:18:41.470 { 00:18:41.470 "name": "BaseBdev3", 00:18:41.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.470 "is_configured": false, 00:18:41.470 "data_offset": 0, 00:18:41.470 "data_size": 0 00:18:41.470 }, 00:18:41.470 { 00:18:41.470 "name": "BaseBdev4", 00:18:41.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.470 "is_configured": false, 00:18:41.470 "data_offset": 0, 00:18:41.470 "data_size": 0 00:18:41.470 } 00:18:41.470 ] 00:18:41.470 }' 00:18:41.470 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.470 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.731 [2024-11-20 05:29:13.384743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:41.731 [2024-11-20 05:29:13.384786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.731 [2024-11-20 05:29:13.392741] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:41.731 [2024-11-20 05:29:13.392779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:41.731 [2024-11-20 05:29:13.392789] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:41.731 [2024-11-20 05:29:13.392799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:41.731 [2024-11-20 05:29:13.392806] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:41.731 [2024-11-20 05:29:13.392816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:41.731 [2024-11-20 05:29:13.392822] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:41.731 [2024-11-20 05:29:13.392832] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.731 [2024-11-20 05:29:13.427611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:41.731 BaseBdev1 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.731 [ 00:18:41.731 { 00:18:41.731 "name": "BaseBdev1", 00:18:41.731 "aliases": [ 00:18:41.731 "a4128260-b82a-4624-89a3-f4e71b16d165" 00:18:41.731 ], 00:18:41.731 "product_name": "Malloc disk", 00:18:41.731 "block_size": 512, 00:18:41.731 "num_blocks": 65536, 00:18:41.731 "uuid": "a4128260-b82a-4624-89a3-f4e71b16d165", 00:18:41.731 "assigned_rate_limits": { 00:18:41.731 "rw_ios_per_sec": 0, 00:18:41.731 "rw_mbytes_per_sec": 0, 00:18:41.731 "r_mbytes_per_sec": 0, 00:18:41.731 "w_mbytes_per_sec": 0 00:18:41.731 }, 00:18:41.731 "claimed": true, 00:18:41.731 "claim_type": "exclusive_write", 00:18:41.731 "zoned": false, 00:18:41.731 "supported_io_types": { 00:18:41.731 "read": true, 00:18:41.731 "write": true, 00:18:41.731 "unmap": true, 00:18:41.731 "flush": true, 00:18:41.731 "reset": true, 00:18:41.731 "nvme_admin": false, 00:18:41.731 "nvme_io": false, 00:18:41.731 "nvme_io_md": false, 00:18:41.731 "write_zeroes": true, 00:18:41.731 "zcopy": true, 00:18:41.731 "get_zone_info": false, 00:18:41.731 "zone_management": false, 00:18:41.731 "zone_append": false, 00:18:41.731 "compare": false, 00:18:41.731 "compare_and_write": false, 00:18:41.731 "abort": true, 00:18:41.731 "seek_hole": false, 00:18:41.731 "seek_data": false, 00:18:41.731 "copy": true, 00:18:41.731 "nvme_iov_md": false 00:18:41.731 }, 00:18:41.731 "memory_domains": [ 00:18:41.731 { 00:18:41.731 "dma_device_id": "system", 00:18:41.731 "dma_device_type": 1 00:18:41.731 }, 00:18:41.731 { 00:18:41.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.731 "dma_device_type": 2 00:18:41.731 } 00:18:41.731 ], 00:18:41.731 "driver_specific": {} 00:18:41.731 } 00:18:41.731 ] 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.731 "name": "Existed_Raid", 
00:18:41.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.731 "strip_size_kb": 64, 00:18:41.731 "state": "configuring", 00:18:41.731 "raid_level": "raid0", 00:18:41.731 "superblock": false, 00:18:41.731 "num_base_bdevs": 4, 00:18:41.731 "num_base_bdevs_discovered": 1, 00:18:41.731 "num_base_bdevs_operational": 4, 00:18:41.731 "base_bdevs_list": [ 00:18:41.731 { 00:18:41.731 "name": "BaseBdev1", 00:18:41.731 "uuid": "a4128260-b82a-4624-89a3-f4e71b16d165", 00:18:41.731 "is_configured": true, 00:18:41.731 "data_offset": 0, 00:18:41.731 "data_size": 65536 00:18:41.731 }, 00:18:41.731 { 00:18:41.731 "name": "BaseBdev2", 00:18:41.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.731 "is_configured": false, 00:18:41.731 "data_offset": 0, 00:18:41.731 "data_size": 0 00:18:41.731 }, 00:18:41.731 { 00:18:41.731 "name": "BaseBdev3", 00:18:41.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.731 "is_configured": false, 00:18:41.731 "data_offset": 0, 00:18:41.731 "data_size": 0 00:18:41.731 }, 00:18:41.731 { 00:18:41.731 "name": "BaseBdev4", 00:18:41.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.731 "is_configured": false, 00:18:41.731 "data_offset": 0, 00:18:41.731 "data_size": 0 00:18:41.731 } 00:18:41.731 ] 00:18:41.731 }' 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.731 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.991 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:41.991 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.991 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.991 [2024-11-20 05:29:13.771751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:41.991 [2024-11-20 05:29:13.771826] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:41.991 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.991 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:41.991 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.991 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.991 [2024-11-20 05:29:13.779828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:41.991 [2024-11-20 05:29:13.781852] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:41.991 [2024-11-20 05:29:13.781895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:41.991 [2024-11-20 05:29:13.781905] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:41.991 [2024-11-20 05:29:13.781916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:41.991 [2024-11-20 05:29:13.781922] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:41.991 [2024-11-20 05:29:13.781931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:41.991 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.991 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:41.991 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:41.991 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:18:41.991 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.992 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:41.992 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:41.992 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.992 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:41.992 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.992 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.992 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.992 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.992 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.992 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.992 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.992 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.992 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.992 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.992 "name": "Existed_Raid", 00:18:41.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.992 "strip_size_kb": 64, 00:18:41.992 "state": "configuring", 00:18:41.992 "raid_level": "raid0", 00:18:41.992 "superblock": false, 00:18:41.992 "num_base_bdevs": 4, 00:18:41.992 
"num_base_bdevs_discovered": 1, 00:18:41.992 "num_base_bdevs_operational": 4, 00:18:41.992 "base_bdevs_list": [ 00:18:41.992 { 00:18:41.992 "name": "BaseBdev1", 00:18:41.992 "uuid": "a4128260-b82a-4624-89a3-f4e71b16d165", 00:18:41.992 "is_configured": true, 00:18:41.992 "data_offset": 0, 00:18:41.992 "data_size": 65536 00:18:41.992 }, 00:18:41.992 { 00:18:41.992 "name": "BaseBdev2", 00:18:41.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.992 "is_configured": false, 00:18:41.992 "data_offset": 0, 00:18:41.992 "data_size": 0 00:18:41.992 }, 00:18:41.992 { 00:18:41.992 "name": "BaseBdev3", 00:18:41.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.992 "is_configured": false, 00:18:41.992 "data_offset": 0, 00:18:41.992 "data_size": 0 00:18:41.992 }, 00:18:41.992 { 00:18:41.992 "name": "BaseBdev4", 00:18:41.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.992 "is_configured": false, 00:18:41.992 "data_offset": 0, 00:18:41.992 "data_size": 0 00:18:41.992 } 00:18:41.992 ] 00:18:41.992 }' 00:18:41.992 05:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.992 05:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.250 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:42.250 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.250 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.509 [2024-11-20 05:29:14.092660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:42.509 BaseBdev2 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:42.509 05:29:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.509 [ 00:18:42.509 { 00:18:42.509 "name": "BaseBdev2", 00:18:42.509 "aliases": [ 00:18:42.509 "156a29b1-2092-477e-8a9b-cef3ce7cdd1d" 00:18:42.509 ], 00:18:42.509 "product_name": "Malloc disk", 00:18:42.509 "block_size": 512, 00:18:42.509 "num_blocks": 65536, 00:18:42.509 "uuid": "156a29b1-2092-477e-8a9b-cef3ce7cdd1d", 00:18:42.509 "assigned_rate_limits": { 00:18:42.509 "rw_ios_per_sec": 0, 00:18:42.509 "rw_mbytes_per_sec": 0, 00:18:42.509 "r_mbytes_per_sec": 0, 00:18:42.509 "w_mbytes_per_sec": 0 00:18:42.509 }, 00:18:42.509 "claimed": true, 00:18:42.509 "claim_type": "exclusive_write", 00:18:42.509 "zoned": false, 00:18:42.509 "supported_io_types": { 
00:18:42.509 "read": true, 00:18:42.509 "write": true, 00:18:42.509 "unmap": true, 00:18:42.509 "flush": true, 00:18:42.509 "reset": true, 00:18:42.509 "nvme_admin": false, 00:18:42.509 "nvme_io": false, 00:18:42.509 "nvme_io_md": false, 00:18:42.509 "write_zeroes": true, 00:18:42.509 "zcopy": true, 00:18:42.509 "get_zone_info": false, 00:18:42.509 "zone_management": false, 00:18:42.509 "zone_append": false, 00:18:42.509 "compare": false, 00:18:42.509 "compare_and_write": false, 00:18:42.509 "abort": true, 00:18:42.509 "seek_hole": false, 00:18:42.509 "seek_data": false, 00:18:42.509 "copy": true, 00:18:42.509 "nvme_iov_md": false 00:18:42.509 }, 00:18:42.509 "memory_domains": [ 00:18:42.509 { 00:18:42.509 "dma_device_id": "system", 00:18:42.509 "dma_device_type": 1 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.509 "dma_device_type": 2 00:18:42.509 } 00:18:42.509 ], 00:18:42.509 "driver_specific": {} 00:18:42.509 } 00:18:42.509 ] 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.509 "name": "Existed_Raid", 00:18:42.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.509 "strip_size_kb": 64, 00:18:42.509 "state": "configuring", 00:18:42.509 "raid_level": "raid0", 00:18:42.509 "superblock": false, 00:18:42.509 "num_base_bdevs": 4, 00:18:42.509 "num_base_bdevs_discovered": 2, 00:18:42.509 "num_base_bdevs_operational": 4, 00:18:42.509 "base_bdevs_list": [ 00:18:42.509 { 00:18:42.509 "name": "BaseBdev1", 00:18:42.509 "uuid": "a4128260-b82a-4624-89a3-f4e71b16d165", 00:18:42.509 "is_configured": true, 00:18:42.509 "data_offset": 0, 00:18:42.509 "data_size": 65536 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "name": "BaseBdev2", 00:18:42.509 "uuid": "156a29b1-2092-477e-8a9b-cef3ce7cdd1d", 00:18:42.509 
"is_configured": true, 00:18:42.509 "data_offset": 0, 00:18:42.509 "data_size": 65536 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "name": "BaseBdev3", 00:18:42.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.509 "is_configured": false, 00:18:42.509 "data_offset": 0, 00:18:42.509 "data_size": 0 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "name": "BaseBdev4", 00:18:42.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.509 "is_configured": false, 00:18:42.509 "data_offset": 0, 00:18:42.509 "data_size": 0 00:18:42.509 } 00:18:42.509 ] 00:18:42.509 }' 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.509 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.769 [2024-11-20 05:29:14.474463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:42.769 BaseBdev3 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.769 [ 00:18:42.769 { 00:18:42.769 "name": "BaseBdev3", 00:18:42.769 "aliases": [ 00:18:42.769 "6830b776-4807-49a2-a9e5-4a32b169a573" 00:18:42.769 ], 00:18:42.769 "product_name": "Malloc disk", 00:18:42.769 "block_size": 512, 00:18:42.769 "num_blocks": 65536, 00:18:42.769 "uuid": "6830b776-4807-49a2-a9e5-4a32b169a573", 00:18:42.769 "assigned_rate_limits": { 00:18:42.769 "rw_ios_per_sec": 0, 00:18:42.769 "rw_mbytes_per_sec": 0, 00:18:42.769 "r_mbytes_per_sec": 0, 00:18:42.769 "w_mbytes_per_sec": 0 00:18:42.769 }, 00:18:42.769 "claimed": true, 00:18:42.769 "claim_type": "exclusive_write", 00:18:42.769 "zoned": false, 00:18:42.769 "supported_io_types": { 00:18:42.769 "read": true, 00:18:42.769 "write": true, 00:18:42.769 "unmap": true, 00:18:42.769 "flush": true, 00:18:42.769 "reset": true, 00:18:42.769 "nvme_admin": false, 00:18:42.769 "nvme_io": false, 00:18:42.769 "nvme_io_md": false, 00:18:42.769 "write_zeroes": true, 00:18:42.769 "zcopy": true, 00:18:42.769 "get_zone_info": false, 00:18:42.769 "zone_management": false, 00:18:42.769 "zone_append": false, 00:18:42.769 "compare": false, 00:18:42.769 "compare_and_write": false, 
00:18:42.769 "abort": true, 00:18:42.769 "seek_hole": false, 00:18:42.769 "seek_data": false, 00:18:42.769 "copy": true, 00:18:42.769 "nvme_iov_md": false 00:18:42.769 }, 00:18:42.769 "memory_domains": [ 00:18:42.769 { 00:18:42.769 "dma_device_id": "system", 00:18:42.769 "dma_device_type": 1 00:18:42.769 }, 00:18:42.769 { 00:18:42.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.769 "dma_device_type": 2 00:18:42.769 } 00:18:42.769 ], 00:18:42.769 "driver_specific": {} 00:18:42.769 } 00:18:42.769 ] 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.769 "name": "Existed_Raid", 00:18:42.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.769 "strip_size_kb": 64, 00:18:42.769 "state": "configuring", 00:18:42.769 "raid_level": "raid0", 00:18:42.769 "superblock": false, 00:18:42.769 "num_base_bdevs": 4, 00:18:42.769 "num_base_bdevs_discovered": 3, 00:18:42.769 "num_base_bdevs_operational": 4, 00:18:42.769 "base_bdevs_list": [ 00:18:42.769 { 00:18:42.769 "name": "BaseBdev1", 00:18:42.769 "uuid": "a4128260-b82a-4624-89a3-f4e71b16d165", 00:18:42.769 "is_configured": true, 00:18:42.769 "data_offset": 0, 00:18:42.769 "data_size": 65536 00:18:42.769 }, 00:18:42.769 { 00:18:42.769 "name": "BaseBdev2", 00:18:42.769 "uuid": "156a29b1-2092-477e-8a9b-cef3ce7cdd1d", 00:18:42.769 "is_configured": true, 00:18:42.769 "data_offset": 0, 00:18:42.769 "data_size": 65536 00:18:42.769 }, 00:18:42.769 { 00:18:42.769 "name": "BaseBdev3", 00:18:42.769 "uuid": "6830b776-4807-49a2-a9e5-4a32b169a573", 00:18:42.769 "is_configured": true, 00:18:42.769 "data_offset": 0, 00:18:42.769 "data_size": 65536 00:18:42.769 }, 00:18:42.769 { 00:18:42.769 "name": "BaseBdev4", 00:18:42.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.769 "is_configured": false, 
00:18:42.769 "data_offset": 0, 00:18:42.769 "data_size": 0 00:18:42.769 } 00:18:42.769 ] 00:18:42.769 }' 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.769 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.028 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:43.028 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.028 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.028 [2024-11-20 05:29:14.855302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:43.028 [2024-11-20 05:29:14.855358] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:43.028 [2024-11-20 05:29:14.855387] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:43.028 [2024-11-20 05:29:14.855678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:43.028 [2024-11-20 05:29:14.855865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:43.028 [2024-11-20 05:29:14.855883] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:43.028 [2024-11-20 05:29:14.856131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.028 BaseBdev4 00:18:43.028 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.028 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:43.028 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:43.028 05:29:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:43.028 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:43.028 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:43.028 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:43.028 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:43.028 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.028 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.390 [ 00:18:43.390 { 00:18:43.390 "name": "BaseBdev4", 00:18:43.390 "aliases": [ 00:18:43.390 "d94d5974-aca4-45fa-8a35-872e4d017cfe" 00:18:43.390 ], 00:18:43.390 "product_name": "Malloc disk", 00:18:43.390 "block_size": 512, 00:18:43.390 "num_blocks": 65536, 00:18:43.390 "uuid": "d94d5974-aca4-45fa-8a35-872e4d017cfe", 00:18:43.390 "assigned_rate_limits": { 00:18:43.390 "rw_ios_per_sec": 0, 00:18:43.390 "rw_mbytes_per_sec": 0, 00:18:43.390 "r_mbytes_per_sec": 0, 00:18:43.390 "w_mbytes_per_sec": 0 00:18:43.390 }, 00:18:43.390 "claimed": true, 00:18:43.390 "claim_type": "exclusive_write", 00:18:43.390 "zoned": false, 00:18:43.390 "supported_io_types": { 00:18:43.390 "read": true, 00:18:43.390 "write": true, 00:18:43.390 "unmap": true, 00:18:43.390 "flush": true, 00:18:43.390 "reset": true, 00:18:43.390 
"nvme_admin": false, 00:18:43.390 "nvme_io": false, 00:18:43.390 "nvme_io_md": false, 00:18:43.390 "write_zeroes": true, 00:18:43.390 "zcopy": true, 00:18:43.390 "get_zone_info": false, 00:18:43.390 "zone_management": false, 00:18:43.390 "zone_append": false, 00:18:43.390 "compare": false, 00:18:43.390 "compare_and_write": false, 00:18:43.390 "abort": true, 00:18:43.390 "seek_hole": false, 00:18:43.390 "seek_data": false, 00:18:43.390 "copy": true, 00:18:43.390 "nvme_iov_md": false 00:18:43.390 }, 00:18:43.390 "memory_domains": [ 00:18:43.390 { 00:18:43.390 "dma_device_id": "system", 00:18:43.390 "dma_device_type": 1 00:18:43.390 }, 00:18:43.390 { 00:18:43.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.390 "dma_device_type": 2 00:18:43.390 } 00:18:43.390 ], 00:18:43.390 "driver_specific": {} 00:18:43.390 } 00:18:43.390 ] 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:43.390 05:29:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.390 "name": "Existed_Raid", 00:18:43.390 "uuid": "50073b9e-e064-4dfa-84af-6290d3adf056", 00:18:43.390 "strip_size_kb": 64, 00:18:43.390 "state": "online", 00:18:43.390 "raid_level": "raid0", 00:18:43.390 "superblock": false, 00:18:43.390 "num_base_bdevs": 4, 00:18:43.390 "num_base_bdevs_discovered": 4, 00:18:43.390 "num_base_bdevs_operational": 4, 00:18:43.390 "base_bdevs_list": [ 00:18:43.390 { 00:18:43.390 "name": "BaseBdev1", 00:18:43.390 "uuid": "a4128260-b82a-4624-89a3-f4e71b16d165", 00:18:43.390 "is_configured": true, 00:18:43.390 "data_offset": 0, 00:18:43.390 "data_size": 65536 00:18:43.390 }, 00:18:43.390 { 00:18:43.390 "name": "BaseBdev2", 00:18:43.390 "uuid": "156a29b1-2092-477e-8a9b-cef3ce7cdd1d", 00:18:43.390 "is_configured": true, 00:18:43.390 "data_offset": 0, 00:18:43.390 "data_size": 65536 00:18:43.390 }, 00:18:43.390 { 00:18:43.390 "name": "BaseBdev3", 00:18:43.390 "uuid": 
"6830b776-4807-49a2-a9e5-4a32b169a573", 00:18:43.390 "is_configured": true, 00:18:43.390 "data_offset": 0, 00:18:43.390 "data_size": 65536 00:18:43.390 }, 00:18:43.390 { 00:18:43.390 "name": "BaseBdev4", 00:18:43.390 "uuid": "d94d5974-aca4-45fa-8a35-872e4d017cfe", 00:18:43.390 "is_configured": true, 00:18:43.390 "data_offset": 0, 00:18:43.390 "data_size": 65536 00:18:43.390 } 00:18:43.390 ] 00:18:43.390 }' 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.390 05:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.662 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:43.662 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:43.662 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:43.662 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:43.662 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:43.662 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:43.662 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:43.662 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:43.662 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.662 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.662 [2024-11-20 05:29:15.191872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:43.662 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.662 05:29:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:43.662 "name": "Existed_Raid", 00:18:43.662 "aliases": [ 00:18:43.662 "50073b9e-e064-4dfa-84af-6290d3adf056" 00:18:43.662 ], 00:18:43.662 "product_name": "Raid Volume", 00:18:43.662 "block_size": 512, 00:18:43.662 "num_blocks": 262144, 00:18:43.662 "uuid": "50073b9e-e064-4dfa-84af-6290d3adf056", 00:18:43.662 "assigned_rate_limits": { 00:18:43.662 "rw_ios_per_sec": 0, 00:18:43.662 "rw_mbytes_per_sec": 0, 00:18:43.662 "r_mbytes_per_sec": 0, 00:18:43.662 "w_mbytes_per_sec": 0 00:18:43.662 }, 00:18:43.662 "claimed": false, 00:18:43.662 "zoned": false, 00:18:43.662 "supported_io_types": { 00:18:43.662 "read": true, 00:18:43.662 "write": true, 00:18:43.662 "unmap": true, 00:18:43.662 "flush": true, 00:18:43.662 "reset": true, 00:18:43.662 "nvme_admin": false, 00:18:43.662 "nvme_io": false, 00:18:43.662 "nvme_io_md": false, 00:18:43.662 "write_zeroes": true, 00:18:43.662 "zcopy": false, 00:18:43.662 "get_zone_info": false, 00:18:43.662 "zone_management": false, 00:18:43.662 "zone_append": false, 00:18:43.662 "compare": false, 00:18:43.662 "compare_and_write": false, 00:18:43.662 "abort": false, 00:18:43.662 "seek_hole": false, 00:18:43.662 "seek_data": false, 00:18:43.662 "copy": false, 00:18:43.662 "nvme_iov_md": false 00:18:43.662 }, 00:18:43.662 "memory_domains": [ 00:18:43.662 { 00:18:43.662 "dma_device_id": "system", 00:18:43.662 "dma_device_type": 1 00:18:43.662 }, 00:18:43.662 { 00:18:43.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.662 "dma_device_type": 2 00:18:43.662 }, 00:18:43.662 { 00:18:43.662 "dma_device_id": "system", 00:18:43.662 "dma_device_type": 1 00:18:43.662 }, 00:18:43.662 { 00:18:43.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.662 "dma_device_type": 2 00:18:43.662 }, 00:18:43.662 { 00:18:43.662 "dma_device_id": "system", 00:18:43.662 "dma_device_type": 1 00:18:43.662 }, 00:18:43.662 { 00:18:43.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:18:43.662 "dma_device_type": 2 00:18:43.662 }, 00:18:43.662 { 00:18:43.662 "dma_device_id": "system", 00:18:43.662 "dma_device_type": 1 00:18:43.662 }, 00:18:43.662 { 00:18:43.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.662 "dma_device_type": 2 00:18:43.662 } 00:18:43.662 ], 00:18:43.662 "driver_specific": { 00:18:43.662 "raid": { 00:18:43.662 "uuid": "50073b9e-e064-4dfa-84af-6290d3adf056", 00:18:43.662 "strip_size_kb": 64, 00:18:43.662 "state": "online", 00:18:43.662 "raid_level": "raid0", 00:18:43.662 "superblock": false, 00:18:43.662 "num_base_bdevs": 4, 00:18:43.662 "num_base_bdevs_discovered": 4, 00:18:43.662 "num_base_bdevs_operational": 4, 00:18:43.662 "base_bdevs_list": [ 00:18:43.662 { 00:18:43.662 "name": "BaseBdev1", 00:18:43.662 "uuid": "a4128260-b82a-4624-89a3-f4e71b16d165", 00:18:43.662 "is_configured": true, 00:18:43.662 "data_offset": 0, 00:18:43.662 "data_size": 65536 00:18:43.662 }, 00:18:43.662 { 00:18:43.662 "name": "BaseBdev2", 00:18:43.662 "uuid": "156a29b1-2092-477e-8a9b-cef3ce7cdd1d", 00:18:43.662 "is_configured": true, 00:18:43.662 "data_offset": 0, 00:18:43.662 "data_size": 65536 00:18:43.662 }, 00:18:43.662 { 00:18:43.662 "name": "BaseBdev3", 00:18:43.662 "uuid": "6830b776-4807-49a2-a9e5-4a32b169a573", 00:18:43.662 "is_configured": true, 00:18:43.663 "data_offset": 0, 00:18:43.663 "data_size": 65536 00:18:43.663 }, 00:18:43.663 { 00:18:43.663 "name": "BaseBdev4", 00:18:43.663 "uuid": "d94d5974-aca4-45fa-8a35-872e4d017cfe", 00:18:43.663 "is_configured": true, 00:18:43.663 "data_offset": 0, 00:18:43.663 "data_size": 65536 00:18:43.663 } 00:18:43.663 ] 00:18:43.663 } 00:18:43.663 } 00:18:43.663 }' 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:43.663 BaseBdev2 00:18:43.663 BaseBdev3 
00:18:43.663 BaseBdev4' 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.663 05:29:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:43.663 05:29:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.663 [2024-11-20 05:29:15.415565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:43.663 [2024-11-20 05:29:15.415602] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.663 [2024-11-20 05:29:15.415657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.663 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.922 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.922 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.922 "name": "Existed_Raid", 00:18:43.922 "uuid": "50073b9e-e064-4dfa-84af-6290d3adf056", 00:18:43.922 "strip_size_kb": 64, 00:18:43.922 "state": "offline", 00:18:43.922 "raid_level": "raid0", 00:18:43.922 "superblock": false, 00:18:43.922 "num_base_bdevs": 4, 00:18:43.922 "num_base_bdevs_discovered": 3, 00:18:43.922 "num_base_bdevs_operational": 3, 00:18:43.922 "base_bdevs_list": [ 00:18:43.922 { 00:18:43.922 "name": null, 00:18:43.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.922 "is_configured": false, 00:18:43.922 "data_offset": 0, 00:18:43.922 "data_size": 65536 00:18:43.922 }, 00:18:43.922 { 00:18:43.922 "name": "BaseBdev2", 00:18:43.922 "uuid": "156a29b1-2092-477e-8a9b-cef3ce7cdd1d", 00:18:43.922 "is_configured": 
true, 00:18:43.922 "data_offset": 0, 00:18:43.922 "data_size": 65536 00:18:43.922 }, 00:18:43.922 { 00:18:43.922 "name": "BaseBdev3", 00:18:43.922 "uuid": "6830b776-4807-49a2-a9e5-4a32b169a573", 00:18:43.922 "is_configured": true, 00:18:43.922 "data_offset": 0, 00:18:43.922 "data_size": 65536 00:18:43.922 }, 00:18:43.922 { 00:18:43.922 "name": "BaseBdev4", 00:18:43.922 "uuid": "d94d5974-aca4-45fa-8a35-872e4d017cfe", 00:18:43.922 "is_configured": true, 00:18:43.922 "data_offset": 0, 00:18:43.922 "data_size": 65536 00:18:43.922 } 00:18:43.922 ] 00:18:43.922 }' 00:18:43.922 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.922 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.183 [2024-11-20 05:29:15.865865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.183 05:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.183 [2024-11-20 05:29:15.968185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:44.444 05:29:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.444 [2024-11-20 05:29:16.073443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:44.444 [2024-11-20 05:29:16.073499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.444 BaseBdev2 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:44.444 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.445 [ 00:18:44.445 { 00:18:44.445 "name": "BaseBdev2", 00:18:44.445 "aliases": [ 00:18:44.445 "d6f7539e-3a06-4644-87f3-6324daf8a1b1" 00:18:44.445 ], 00:18:44.445 "product_name": "Malloc disk", 00:18:44.445 "block_size": 512, 00:18:44.445 "num_blocks": 65536, 00:18:44.445 "uuid": "d6f7539e-3a06-4644-87f3-6324daf8a1b1", 00:18:44.445 "assigned_rate_limits": { 00:18:44.445 "rw_ios_per_sec": 0, 00:18:44.445 "rw_mbytes_per_sec": 0, 00:18:44.445 "r_mbytes_per_sec": 0, 00:18:44.445 "w_mbytes_per_sec": 0 00:18:44.445 }, 00:18:44.445 "claimed": false, 00:18:44.445 "zoned": false, 00:18:44.445 "supported_io_types": { 00:18:44.445 "read": true, 00:18:44.445 "write": true, 00:18:44.445 "unmap": true, 00:18:44.445 "flush": true, 00:18:44.445 "reset": true, 00:18:44.445 "nvme_admin": false, 00:18:44.445 "nvme_io": false, 00:18:44.445 "nvme_io_md": false, 00:18:44.445 "write_zeroes": true, 00:18:44.445 "zcopy": true, 00:18:44.445 "get_zone_info": false, 00:18:44.445 "zone_management": false, 00:18:44.445 "zone_append": false, 00:18:44.445 "compare": false, 00:18:44.445 "compare_and_write": false, 00:18:44.445 "abort": true, 00:18:44.445 "seek_hole": false, 00:18:44.445 
"seek_data": false, 00:18:44.445 "copy": true, 00:18:44.445 "nvme_iov_md": false 00:18:44.445 }, 00:18:44.445 "memory_domains": [ 00:18:44.445 { 00:18:44.445 "dma_device_id": "system", 00:18:44.445 "dma_device_type": 1 00:18:44.445 }, 00:18:44.445 { 00:18:44.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.445 "dma_device_type": 2 00:18:44.445 } 00:18:44.445 ], 00:18:44.445 "driver_specific": {} 00:18:44.445 } 00:18:44.445 ] 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.445 BaseBdev3 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.445 [ 00:18:44.445 { 00:18:44.445 "name": "BaseBdev3", 00:18:44.445 "aliases": [ 00:18:44.445 "923a0634-c4ea-45b0-88bd-025b88589522" 00:18:44.445 ], 00:18:44.445 "product_name": "Malloc disk", 00:18:44.445 "block_size": 512, 00:18:44.445 "num_blocks": 65536, 00:18:44.445 "uuid": "923a0634-c4ea-45b0-88bd-025b88589522", 00:18:44.445 "assigned_rate_limits": { 00:18:44.445 "rw_ios_per_sec": 0, 00:18:44.445 "rw_mbytes_per_sec": 0, 00:18:44.445 "r_mbytes_per_sec": 0, 00:18:44.445 "w_mbytes_per_sec": 0 00:18:44.445 }, 00:18:44.445 "claimed": false, 00:18:44.445 "zoned": false, 00:18:44.445 "supported_io_types": { 00:18:44.445 "read": true, 00:18:44.445 "write": true, 00:18:44.445 "unmap": true, 00:18:44.445 "flush": true, 00:18:44.445 "reset": true, 00:18:44.445 "nvme_admin": false, 00:18:44.445 "nvme_io": false, 00:18:44.445 "nvme_io_md": false, 00:18:44.445 "write_zeroes": true, 00:18:44.445 "zcopy": true, 00:18:44.445 "get_zone_info": false, 00:18:44.445 "zone_management": false, 00:18:44.445 "zone_append": false, 00:18:44.445 "compare": false, 00:18:44.445 "compare_and_write": false, 00:18:44.445 "abort": true, 00:18:44.445 "seek_hole": false, 00:18:44.445 "seek_data": false, 
00:18:44.445 "copy": true, 00:18:44.445 "nvme_iov_md": false 00:18:44.445 }, 00:18:44.445 "memory_domains": [ 00:18:44.445 { 00:18:44.445 "dma_device_id": "system", 00:18:44.445 "dma_device_type": 1 00:18:44.445 }, 00:18:44.445 { 00:18:44.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.445 "dma_device_type": 2 00:18:44.445 } 00:18:44.445 ], 00:18:44.445 "driver_specific": {} 00:18:44.445 } 00:18:44.445 ] 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.445 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.707 BaseBdev4 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:44.707 
05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.707 [ 00:18:44.707 { 00:18:44.707 "name": "BaseBdev4", 00:18:44.707 "aliases": [ 00:18:44.707 "44d46282-766a-41e1-b294-8af8fe378135" 00:18:44.707 ], 00:18:44.707 "product_name": "Malloc disk", 00:18:44.707 "block_size": 512, 00:18:44.707 "num_blocks": 65536, 00:18:44.707 "uuid": "44d46282-766a-41e1-b294-8af8fe378135", 00:18:44.707 "assigned_rate_limits": { 00:18:44.707 "rw_ios_per_sec": 0, 00:18:44.707 "rw_mbytes_per_sec": 0, 00:18:44.707 "r_mbytes_per_sec": 0, 00:18:44.707 "w_mbytes_per_sec": 0 00:18:44.707 }, 00:18:44.707 "claimed": false, 00:18:44.707 "zoned": false, 00:18:44.707 "supported_io_types": { 00:18:44.707 "read": true, 00:18:44.707 "write": true, 00:18:44.707 "unmap": true, 00:18:44.707 "flush": true, 00:18:44.707 "reset": true, 00:18:44.707 "nvme_admin": false, 00:18:44.707 "nvme_io": false, 00:18:44.707 "nvme_io_md": false, 00:18:44.707 "write_zeroes": true, 00:18:44.707 "zcopy": true, 00:18:44.707 "get_zone_info": false, 00:18:44.707 "zone_management": false, 00:18:44.707 "zone_append": false, 00:18:44.707 "compare": false, 00:18:44.707 "compare_and_write": false, 00:18:44.707 "abort": true, 00:18:44.707 "seek_hole": false, 00:18:44.707 "seek_data": false, 00:18:44.707 
"copy": true, 00:18:44.707 "nvme_iov_md": false 00:18:44.707 }, 00:18:44.707 "memory_domains": [ 00:18:44.707 { 00:18:44.707 "dma_device_id": "system", 00:18:44.707 "dma_device_type": 1 00:18:44.707 }, 00:18:44.707 { 00:18:44.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.707 "dma_device_type": 2 00:18:44.707 } 00:18:44.707 ], 00:18:44.707 "driver_specific": {} 00:18:44.707 } 00:18:44.707 ] 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.707 [2024-11-20 05:29:16.332901] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:44.707 [2024-11-20 05:29:16.332949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:44.707 [2024-11-20 05:29:16.332973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:44.707 [2024-11-20 05:29:16.334980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:44.707 [2024-11-20 05:29:16.335034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.707 05:29:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.707 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.707 "name": "Existed_Raid", 00:18:44.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.708 "strip_size_kb": 64, 00:18:44.708 "state": "configuring", 00:18:44.708 
"raid_level": "raid0", 00:18:44.708 "superblock": false, 00:18:44.708 "num_base_bdevs": 4, 00:18:44.708 "num_base_bdevs_discovered": 3, 00:18:44.708 "num_base_bdevs_operational": 4, 00:18:44.708 "base_bdevs_list": [ 00:18:44.708 { 00:18:44.708 "name": "BaseBdev1", 00:18:44.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.708 "is_configured": false, 00:18:44.708 "data_offset": 0, 00:18:44.708 "data_size": 0 00:18:44.708 }, 00:18:44.708 { 00:18:44.708 "name": "BaseBdev2", 00:18:44.708 "uuid": "d6f7539e-3a06-4644-87f3-6324daf8a1b1", 00:18:44.708 "is_configured": true, 00:18:44.708 "data_offset": 0, 00:18:44.708 "data_size": 65536 00:18:44.708 }, 00:18:44.708 { 00:18:44.708 "name": "BaseBdev3", 00:18:44.708 "uuid": "923a0634-c4ea-45b0-88bd-025b88589522", 00:18:44.708 "is_configured": true, 00:18:44.708 "data_offset": 0, 00:18:44.708 "data_size": 65536 00:18:44.708 }, 00:18:44.708 { 00:18:44.708 "name": "BaseBdev4", 00:18:44.708 "uuid": "44d46282-766a-41e1-b294-8af8fe378135", 00:18:44.708 "is_configured": true, 00:18:44.708 "data_offset": 0, 00:18:44.708 "data_size": 65536 00:18:44.708 } 00:18:44.708 ] 00:18:44.708 }' 00:18:44.708 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.708 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.969 [2024-11-20 05:29:16.649006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.969 "name": "Existed_Raid", 00:18:44.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.969 "strip_size_kb": 64, 00:18:44.969 "state": "configuring", 00:18:44.969 "raid_level": "raid0", 00:18:44.969 "superblock": false, 00:18:44.969 
"num_base_bdevs": 4, 00:18:44.969 "num_base_bdevs_discovered": 2, 00:18:44.969 "num_base_bdevs_operational": 4, 00:18:44.969 "base_bdevs_list": [ 00:18:44.969 { 00:18:44.969 "name": "BaseBdev1", 00:18:44.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.969 "is_configured": false, 00:18:44.969 "data_offset": 0, 00:18:44.969 "data_size": 0 00:18:44.969 }, 00:18:44.969 { 00:18:44.969 "name": null, 00:18:44.969 "uuid": "d6f7539e-3a06-4644-87f3-6324daf8a1b1", 00:18:44.969 "is_configured": false, 00:18:44.969 "data_offset": 0, 00:18:44.969 "data_size": 65536 00:18:44.969 }, 00:18:44.969 { 00:18:44.969 "name": "BaseBdev3", 00:18:44.969 "uuid": "923a0634-c4ea-45b0-88bd-025b88589522", 00:18:44.969 "is_configured": true, 00:18:44.969 "data_offset": 0, 00:18:44.969 "data_size": 65536 00:18:44.969 }, 00:18:44.969 { 00:18:44.969 "name": "BaseBdev4", 00:18:44.969 "uuid": "44d46282-766a-41e1-b294-8af8fe378135", 00:18:44.969 "is_configured": true, 00:18:44.969 "data_offset": 0, 00:18:44.969 "data_size": 65536 00:18:44.969 } 00:18:44.969 ] 00:18:44.969 }' 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.969 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.229 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.229 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:45.229 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.229 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.229 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.229 05:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:45.229 05:29:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:45.229 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.229 05:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.229 [2024-11-20 05:29:17.026130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:45.229 BaseBdev1 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:45.229 [ 00:18:45.229 { 00:18:45.229 "name": "BaseBdev1", 00:18:45.229 "aliases": [ 00:18:45.229 "5c21d777-5445-4e62-a61b-b7907a9929b2" 00:18:45.229 ], 00:18:45.229 "product_name": "Malloc disk", 00:18:45.229 "block_size": 512, 00:18:45.229 "num_blocks": 65536, 00:18:45.229 "uuid": "5c21d777-5445-4e62-a61b-b7907a9929b2", 00:18:45.229 "assigned_rate_limits": { 00:18:45.229 "rw_ios_per_sec": 0, 00:18:45.229 "rw_mbytes_per_sec": 0, 00:18:45.229 "r_mbytes_per_sec": 0, 00:18:45.229 "w_mbytes_per_sec": 0 00:18:45.229 }, 00:18:45.229 "claimed": true, 00:18:45.229 "claim_type": "exclusive_write", 00:18:45.229 "zoned": false, 00:18:45.229 "supported_io_types": { 00:18:45.229 "read": true, 00:18:45.229 "write": true, 00:18:45.229 "unmap": true, 00:18:45.229 "flush": true, 00:18:45.229 "reset": true, 00:18:45.229 "nvme_admin": false, 00:18:45.229 "nvme_io": false, 00:18:45.229 "nvme_io_md": false, 00:18:45.229 "write_zeroes": true, 00:18:45.229 "zcopy": true, 00:18:45.229 "get_zone_info": false, 00:18:45.229 "zone_management": false, 00:18:45.229 "zone_append": false, 00:18:45.229 "compare": false, 00:18:45.229 "compare_and_write": false, 00:18:45.229 "abort": true, 00:18:45.229 "seek_hole": false, 00:18:45.229 "seek_data": false, 00:18:45.229 "copy": true, 00:18:45.229 "nvme_iov_md": false 00:18:45.229 }, 00:18:45.229 "memory_domains": [ 00:18:45.229 { 00:18:45.229 "dma_device_id": "system", 00:18:45.229 "dma_device_type": 1 00:18:45.229 }, 00:18:45.229 { 00:18:45.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.229 "dma_device_type": 2 00:18:45.229 } 00:18:45.229 ], 00:18:45.229 "driver_specific": {} 00:18:45.229 } 00:18:45.229 ] 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.229 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:45.230 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.230 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.230 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.230 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.230 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.230 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.230 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.230 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.488 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.488 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.488 "name": "Existed_Raid", 00:18:45.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.488 "strip_size_kb": 64, 00:18:45.488 "state": "configuring", 00:18:45.488 "raid_level": "raid0", 00:18:45.488 "superblock": false, 
00:18:45.488 "num_base_bdevs": 4, 00:18:45.488 "num_base_bdevs_discovered": 3, 00:18:45.488 "num_base_bdevs_operational": 4, 00:18:45.488 "base_bdevs_list": [ 00:18:45.488 { 00:18:45.488 "name": "BaseBdev1", 00:18:45.488 "uuid": "5c21d777-5445-4e62-a61b-b7907a9929b2", 00:18:45.488 "is_configured": true, 00:18:45.488 "data_offset": 0, 00:18:45.488 "data_size": 65536 00:18:45.488 }, 00:18:45.488 { 00:18:45.488 "name": null, 00:18:45.488 "uuid": "d6f7539e-3a06-4644-87f3-6324daf8a1b1", 00:18:45.488 "is_configured": false, 00:18:45.488 "data_offset": 0, 00:18:45.488 "data_size": 65536 00:18:45.488 }, 00:18:45.488 { 00:18:45.488 "name": "BaseBdev3", 00:18:45.488 "uuid": "923a0634-c4ea-45b0-88bd-025b88589522", 00:18:45.488 "is_configured": true, 00:18:45.488 "data_offset": 0, 00:18:45.488 "data_size": 65536 00:18:45.488 }, 00:18:45.488 { 00:18:45.488 "name": "BaseBdev4", 00:18:45.488 "uuid": "44d46282-766a-41e1-b294-8af8fe378135", 00:18:45.488 "is_configured": true, 00:18:45.488 "data_offset": 0, 00:18:45.489 "data_size": 65536 00:18:45.489 } 00:18:45.489 ] 00:18:45.489 }' 00:18:45.489 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.489 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:45.746 05:29:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.746 [2024-11-20 05:29:17.410322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.746 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.746 "name": "Existed_Raid", 00:18:45.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.746 "strip_size_kb": 64, 00:18:45.746 "state": "configuring", 00:18:45.746 "raid_level": "raid0", 00:18:45.746 "superblock": false, 00:18:45.746 "num_base_bdevs": 4, 00:18:45.746 "num_base_bdevs_discovered": 2, 00:18:45.746 "num_base_bdevs_operational": 4, 00:18:45.746 "base_bdevs_list": [ 00:18:45.746 { 00:18:45.746 "name": "BaseBdev1", 00:18:45.746 "uuid": "5c21d777-5445-4e62-a61b-b7907a9929b2", 00:18:45.746 "is_configured": true, 00:18:45.746 "data_offset": 0, 00:18:45.746 "data_size": 65536 00:18:45.746 }, 00:18:45.746 { 00:18:45.746 "name": null, 00:18:45.746 "uuid": "d6f7539e-3a06-4644-87f3-6324daf8a1b1", 00:18:45.746 "is_configured": false, 00:18:45.746 "data_offset": 0, 00:18:45.746 "data_size": 65536 00:18:45.746 }, 00:18:45.746 { 00:18:45.747 "name": null, 00:18:45.747 "uuid": "923a0634-c4ea-45b0-88bd-025b88589522", 00:18:45.747 "is_configured": false, 00:18:45.747 "data_offset": 0, 00:18:45.747 "data_size": 65536 00:18:45.747 }, 00:18:45.747 { 00:18:45.747 "name": "BaseBdev4", 00:18:45.747 "uuid": "44d46282-766a-41e1-b294-8af8fe378135", 00:18:45.747 "is_configured": true, 00:18:45.747 "data_offset": 0, 00:18:45.747 "data_size": 65536 00:18:45.747 } 00:18:45.747 ] 00:18:45.747 }' 00:18:45.747 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.747 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.005 [2024-11-20 05:29:17.782408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.005 "name": "Existed_Raid", 00:18:46.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.005 "strip_size_kb": 64, 00:18:46.005 "state": "configuring", 00:18:46.005 "raid_level": "raid0", 00:18:46.005 "superblock": false, 00:18:46.005 "num_base_bdevs": 4, 00:18:46.005 "num_base_bdevs_discovered": 3, 00:18:46.005 "num_base_bdevs_operational": 4, 00:18:46.005 "base_bdevs_list": [ 00:18:46.005 { 00:18:46.005 "name": "BaseBdev1", 00:18:46.005 "uuid": "5c21d777-5445-4e62-a61b-b7907a9929b2", 00:18:46.005 "is_configured": true, 00:18:46.005 "data_offset": 0, 00:18:46.005 "data_size": 65536 00:18:46.005 }, 00:18:46.005 { 00:18:46.005 "name": null, 00:18:46.005 "uuid": "d6f7539e-3a06-4644-87f3-6324daf8a1b1", 00:18:46.005 "is_configured": false, 00:18:46.005 "data_offset": 0, 00:18:46.005 "data_size": 65536 00:18:46.005 }, 00:18:46.005 { 00:18:46.005 "name": "BaseBdev3", 00:18:46.005 "uuid": "923a0634-c4ea-45b0-88bd-025b88589522", 00:18:46.005 "is_configured": 
true, 00:18:46.005 "data_offset": 0, 00:18:46.005 "data_size": 65536 00:18:46.005 }, 00:18:46.005 { 00:18:46.005 "name": "BaseBdev4", 00:18:46.005 "uuid": "44d46282-766a-41e1-b294-8af8fe378135", 00:18:46.005 "is_configured": true, 00:18:46.005 "data_offset": 0, 00:18:46.005 "data_size": 65536 00:18:46.005 } 00:18:46.005 ] 00:18:46.005 }' 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.005 05:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.571 [2024-11-20 05:29:18.138519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.571 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.571 "name": "Existed_Raid", 00:18:46.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.571 "strip_size_kb": 64, 00:18:46.571 "state": "configuring", 00:18:46.571 "raid_level": "raid0", 00:18:46.571 "superblock": false, 00:18:46.571 "num_base_bdevs": 4, 00:18:46.571 "num_base_bdevs_discovered": 2, 00:18:46.571 "num_base_bdevs_operational": 4, 00:18:46.571 
"base_bdevs_list": [ 00:18:46.571 { 00:18:46.571 "name": null, 00:18:46.571 "uuid": "5c21d777-5445-4e62-a61b-b7907a9929b2", 00:18:46.571 "is_configured": false, 00:18:46.571 "data_offset": 0, 00:18:46.571 "data_size": 65536 00:18:46.571 }, 00:18:46.571 { 00:18:46.571 "name": null, 00:18:46.571 "uuid": "d6f7539e-3a06-4644-87f3-6324daf8a1b1", 00:18:46.571 "is_configured": false, 00:18:46.571 "data_offset": 0, 00:18:46.571 "data_size": 65536 00:18:46.571 }, 00:18:46.571 { 00:18:46.571 "name": "BaseBdev3", 00:18:46.571 "uuid": "923a0634-c4ea-45b0-88bd-025b88589522", 00:18:46.571 "is_configured": true, 00:18:46.571 "data_offset": 0, 00:18:46.571 "data_size": 65536 00:18:46.571 }, 00:18:46.571 { 00:18:46.571 "name": "BaseBdev4", 00:18:46.571 "uuid": "44d46282-766a-41e1-b294-8af8fe378135", 00:18:46.572 "is_configured": true, 00:18:46.572 "data_offset": 0, 00:18:46.572 "data_size": 65536 00:18:46.572 } 00:18:46.572 ] 00:18:46.572 }' 00:18:46.572 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.572 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:46.830 05:29:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.830 [2024-11-20 05:29:18.528420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:18:46.830 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.831 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.831 "name": "Existed_Raid", 00:18:46.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.831 "strip_size_kb": 64, 00:18:46.831 "state": "configuring", 00:18:46.831 "raid_level": "raid0", 00:18:46.831 "superblock": false, 00:18:46.831 "num_base_bdevs": 4, 00:18:46.831 "num_base_bdevs_discovered": 3, 00:18:46.831 "num_base_bdevs_operational": 4, 00:18:46.831 "base_bdevs_list": [ 00:18:46.831 { 00:18:46.831 "name": null, 00:18:46.831 "uuid": "5c21d777-5445-4e62-a61b-b7907a9929b2", 00:18:46.831 "is_configured": false, 00:18:46.831 "data_offset": 0, 00:18:46.831 "data_size": 65536 00:18:46.831 }, 00:18:46.831 { 00:18:46.831 "name": "BaseBdev2", 00:18:46.831 "uuid": "d6f7539e-3a06-4644-87f3-6324daf8a1b1", 00:18:46.831 "is_configured": true, 00:18:46.831 "data_offset": 0, 00:18:46.831 "data_size": 65536 00:18:46.831 }, 00:18:46.831 { 00:18:46.831 "name": "BaseBdev3", 00:18:46.831 "uuid": "923a0634-c4ea-45b0-88bd-025b88589522", 00:18:46.831 "is_configured": true, 00:18:46.831 "data_offset": 0, 00:18:46.831 "data_size": 65536 00:18:46.831 }, 00:18:46.831 { 00:18:46.831 "name": "BaseBdev4", 00:18:46.831 "uuid": "44d46282-766a-41e1-b294-8af8fe378135", 00:18:46.831 "is_configured": true, 00:18:46.831 "data_offset": 0, 00:18:46.831 "data_size": 65536 00:18:46.831 } 00:18:46.831 ] 00:18:46.831 }' 00:18:46.831 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.831 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5c21d777-5445-4e62-a61b-b7907a9929b2 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.090 [2024-11-20 05:29:18.917179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:47.090 [2024-11-20 05:29:18.917234] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:47.090 [2024-11-20 05:29:18.917241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:47.090 [2024-11-20 05:29:18.917500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:47.090 [2024-11-20 05:29:18.917617] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:47.090 [2024-11-20 05:29:18.917626] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:47.090 [2024-11-20 05:29:18.917835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.090 NewBaseBdev 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.090 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.348 [ 00:18:47.348 { 
00:18:47.348 "name": "NewBaseBdev", 00:18:47.348 "aliases": [ 00:18:47.348 "5c21d777-5445-4e62-a61b-b7907a9929b2" 00:18:47.348 ], 00:18:47.348 "product_name": "Malloc disk", 00:18:47.348 "block_size": 512, 00:18:47.348 "num_blocks": 65536, 00:18:47.348 "uuid": "5c21d777-5445-4e62-a61b-b7907a9929b2", 00:18:47.348 "assigned_rate_limits": { 00:18:47.348 "rw_ios_per_sec": 0, 00:18:47.348 "rw_mbytes_per_sec": 0, 00:18:47.348 "r_mbytes_per_sec": 0, 00:18:47.348 "w_mbytes_per_sec": 0 00:18:47.348 }, 00:18:47.348 "claimed": true, 00:18:47.348 "claim_type": "exclusive_write", 00:18:47.348 "zoned": false, 00:18:47.348 "supported_io_types": { 00:18:47.348 "read": true, 00:18:47.348 "write": true, 00:18:47.348 "unmap": true, 00:18:47.348 "flush": true, 00:18:47.348 "reset": true, 00:18:47.348 "nvme_admin": false, 00:18:47.348 "nvme_io": false, 00:18:47.348 "nvme_io_md": false, 00:18:47.348 "write_zeroes": true, 00:18:47.348 "zcopy": true, 00:18:47.348 "get_zone_info": false, 00:18:47.348 "zone_management": false, 00:18:47.348 "zone_append": false, 00:18:47.348 "compare": false, 00:18:47.348 "compare_and_write": false, 00:18:47.348 "abort": true, 00:18:47.348 "seek_hole": false, 00:18:47.348 "seek_data": false, 00:18:47.348 "copy": true, 00:18:47.348 "nvme_iov_md": false 00:18:47.348 }, 00:18:47.348 "memory_domains": [ 00:18:47.348 { 00:18:47.348 "dma_device_id": "system", 00:18:47.348 "dma_device_type": 1 00:18:47.348 }, 00:18:47.348 { 00:18:47.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.348 "dma_device_type": 2 00:18:47.348 } 00:18:47.348 ], 00:18:47.348 "driver_specific": {} 00:18:47.348 } 00:18:47.348 ] 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:47.348 
05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.348 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.349 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.349 "name": "Existed_Raid", 00:18:47.349 "uuid": "30d09e57-c166-4d2d-9bd9-05cf21c25b0f", 00:18:47.349 "strip_size_kb": 64, 00:18:47.349 "state": "online", 00:18:47.349 "raid_level": "raid0", 00:18:47.349 "superblock": false, 00:18:47.349 "num_base_bdevs": 4, 00:18:47.349 "num_base_bdevs_discovered": 4, 00:18:47.349 
"num_base_bdevs_operational": 4, 00:18:47.349 "base_bdevs_list": [ 00:18:47.349 { 00:18:47.349 "name": "NewBaseBdev", 00:18:47.349 "uuid": "5c21d777-5445-4e62-a61b-b7907a9929b2", 00:18:47.349 "is_configured": true, 00:18:47.349 "data_offset": 0, 00:18:47.349 "data_size": 65536 00:18:47.349 }, 00:18:47.349 { 00:18:47.349 "name": "BaseBdev2", 00:18:47.349 "uuid": "d6f7539e-3a06-4644-87f3-6324daf8a1b1", 00:18:47.349 "is_configured": true, 00:18:47.349 "data_offset": 0, 00:18:47.349 "data_size": 65536 00:18:47.349 }, 00:18:47.349 { 00:18:47.349 "name": "BaseBdev3", 00:18:47.349 "uuid": "923a0634-c4ea-45b0-88bd-025b88589522", 00:18:47.349 "is_configured": true, 00:18:47.349 "data_offset": 0, 00:18:47.349 "data_size": 65536 00:18:47.349 }, 00:18:47.349 { 00:18:47.349 "name": "BaseBdev4", 00:18:47.349 "uuid": "44d46282-766a-41e1-b294-8af8fe378135", 00:18:47.349 "is_configured": true, 00:18:47.349 "data_offset": 0, 00:18:47.349 "data_size": 65536 00:18:47.349 } 00:18:47.349 ] 00:18:47.349 }' 00:18:47.349 05:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.349 05:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.605 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:47.605 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:47.605 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:47.605 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:47.605 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:47.605 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:47.605 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:18:47.605 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.605 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:47.605 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.605 [2024-11-20 05:29:19.253634] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:47.605 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.605 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:47.605 "name": "Existed_Raid", 00:18:47.605 "aliases": [ 00:18:47.605 "30d09e57-c166-4d2d-9bd9-05cf21c25b0f" 00:18:47.605 ], 00:18:47.605 "product_name": "Raid Volume", 00:18:47.605 "block_size": 512, 00:18:47.605 "num_blocks": 262144, 00:18:47.605 "uuid": "30d09e57-c166-4d2d-9bd9-05cf21c25b0f", 00:18:47.605 "assigned_rate_limits": { 00:18:47.605 "rw_ios_per_sec": 0, 00:18:47.605 "rw_mbytes_per_sec": 0, 00:18:47.605 "r_mbytes_per_sec": 0, 00:18:47.605 "w_mbytes_per_sec": 0 00:18:47.605 }, 00:18:47.605 "claimed": false, 00:18:47.605 "zoned": false, 00:18:47.605 "supported_io_types": { 00:18:47.605 "read": true, 00:18:47.606 "write": true, 00:18:47.606 "unmap": true, 00:18:47.606 "flush": true, 00:18:47.606 "reset": true, 00:18:47.606 "nvme_admin": false, 00:18:47.606 "nvme_io": false, 00:18:47.606 "nvme_io_md": false, 00:18:47.606 "write_zeroes": true, 00:18:47.606 "zcopy": false, 00:18:47.606 "get_zone_info": false, 00:18:47.606 "zone_management": false, 00:18:47.606 "zone_append": false, 00:18:47.606 "compare": false, 00:18:47.606 "compare_and_write": false, 00:18:47.606 "abort": false, 00:18:47.606 "seek_hole": false, 00:18:47.606 "seek_data": false, 00:18:47.606 "copy": false, 00:18:47.606 "nvme_iov_md": false 00:18:47.606 }, 00:18:47.606 "memory_domains": [ 00:18:47.606 { 00:18:47.606 "dma_device_id": "system", 
00:18:47.606 "dma_device_type": 1 00:18:47.606 }, 00:18:47.606 { 00:18:47.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.606 "dma_device_type": 2 00:18:47.606 }, 00:18:47.606 { 00:18:47.606 "dma_device_id": "system", 00:18:47.606 "dma_device_type": 1 00:18:47.606 }, 00:18:47.606 { 00:18:47.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.606 "dma_device_type": 2 00:18:47.606 }, 00:18:47.606 { 00:18:47.606 "dma_device_id": "system", 00:18:47.606 "dma_device_type": 1 00:18:47.606 }, 00:18:47.606 { 00:18:47.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.606 "dma_device_type": 2 00:18:47.606 }, 00:18:47.606 { 00:18:47.606 "dma_device_id": "system", 00:18:47.606 "dma_device_type": 1 00:18:47.606 }, 00:18:47.606 { 00:18:47.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.606 "dma_device_type": 2 00:18:47.606 } 00:18:47.606 ], 00:18:47.606 "driver_specific": { 00:18:47.606 "raid": { 00:18:47.606 "uuid": "30d09e57-c166-4d2d-9bd9-05cf21c25b0f", 00:18:47.606 "strip_size_kb": 64, 00:18:47.606 "state": "online", 00:18:47.606 "raid_level": "raid0", 00:18:47.606 "superblock": false, 00:18:47.606 "num_base_bdevs": 4, 00:18:47.606 "num_base_bdevs_discovered": 4, 00:18:47.606 "num_base_bdevs_operational": 4, 00:18:47.606 "base_bdevs_list": [ 00:18:47.606 { 00:18:47.606 "name": "NewBaseBdev", 00:18:47.606 "uuid": "5c21d777-5445-4e62-a61b-b7907a9929b2", 00:18:47.606 "is_configured": true, 00:18:47.606 "data_offset": 0, 00:18:47.606 "data_size": 65536 00:18:47.606 }, 00:18:47.606 { 00:18:47.606 "name": "BaseBdev2", 00:18:47.606 "uuid": "d6f7539e-3a06-4644-87f3-6324daf8a1b1", 00:18:47.606 "is_configured": true, 00:18:47.606 "data_offset": 0, 00:18:47.606 "data_size": 65536 00:18:47.606 }, 00:18:47.606 { 00:18:47.606 "name": "BaseBdev3", 00:18:47.606 "uuid": "923a0634-c4ea-45b0-88bd-025b88589522", 00:18:47.606 "is_configured": true, 00:18:47.606 "data_offset": 0, 00:18:47.606 "data_size": 65536 00:18:47.606 }, 00:18:47.606 { 00:18:47.606 "name": "BaseBdev4", 
00:18:47.606 "uuid": "44d46282-766a-41e1-b294-8af8fe378135", 00:18:47.606 "is_configured": true, 00:18:47.606 "data_offset": 0, 00:18:47.606 "data_size": 65536 00:18:47.606 } 00:18:47.606 ] 00:18:47.606 } 00:18:47.606 } 00:18:47.606 }' 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:47.606 BaseBdev2 00:18:47.606 BaseBdev3 00:18:47.606 BaseBdev4' 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.606 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.863 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.863 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.863 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.863 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.863 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:47.863 05:29:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.864 [2024-11-20 05:29:19.501335] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:47.864 [2024-11-20 05:29:19.501378] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:47.864 [2024-11-20 05:29:19.501462] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.864 [2024-11-20 05:29:19.501528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.864 [2024-11-20 05:29:19.501539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67723 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 
-- # '[' -z 67723 ']' 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67723 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67723 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:47.864 killing process with pid 67723 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67723' 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 67723 00:18:47.864 [2024-11-20 05:29:19.528512] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:47.864 05:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67723 00:18:48.122 [2024-11-20 05:29:19.738091] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:48.688 00:18:48.688 real 0m8.160s 00:18:48.688 user 0m12.975s 00:18:48.688 sys 0m1.413s 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.688 ************************************ 00:18:48.688 END TEST raid_state_function_test 00:18:48.688 ************************************ 00:18:48.688 05:29:20 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:18:48.688 05:29:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:48.688 05:29:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:48.688 05:29:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.688 ************************************ 00:18:48.688 START TEST raid_state_function_test_sb 00:18:48.688 ************************************ 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:48.688 05:29:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68361 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # 
echo 'Process raid pid: 68361' 00:18:48.688 Process raid pid: 68361 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68361 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 68361 ']' 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:48.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:48.688 05:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.688 [2024-11-20 05:29:20.469712] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:18:48.688 [2024-11-20 05:29:20.469853] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.947 [2024-11-20 05:29:20.625996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.947 [2024-11-20 05:29:20.745390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.205 [2024-11-20 05:29:20.895203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.205 [2024-11-20 05:29:20.895256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.770 [2024-11-20 05:29:21.327861] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:49.770 [2024-11-20 05:29:21.327919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:49.770 [2024-11-20 05:29:21.327930] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:49.770 [2024-11-20 05:29:21.327940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:49.770 [2024-11-20 05:29:21.327947] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:18:49.770 [2024-11-20 05:29:21.327956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:49.770 [2024-11-20 05:29:21.327962] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:49.770 [2024-11-20 05:29:21.327971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.770 05:29:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.770 "name": "Existed_Raid", 00:18:49.770 "uuid": "0cd361fe-1327-48df-8213-16f615c2f834", 00:18:49.770 "strip_size_kb": 64, 00:18:49.770 "state": "configuring", 00:18:49.770 "raid_level": "raid0", 00:18:49.770 "superblock": true, 00:18:49.770 "num_base_bdevs": 4, 00:18:49.770 "num_base_bdevs_discovered": 0, 00:18:49.770 "num_base_bdevs_operational": 4, 00:18:49.770 "base_bdevs_list": [ 00:18:49.770 { 00:18:49.770 "name": "BaseBdev1", 00:18:49.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.770 "is_configured": false, 00:18:49.770 "data_offset": 0, 00:18:49.770 "data_size": 0 00:18:49.770 }, 00:18:49.770 { 00:18:49.770 "name": "BaseBdev2", 00:18:49.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.770 "is_configured": false, 00:18:49.770 "data_offset": 0, 00:18:49.770 "data_size": 0 00:18:49.770 }, 00:18:49.770 { 00:18:49.770 "name": "BaseBdev3", 00:18:49.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.770 "is_configured": false, 00:18:49.770 "data_offset": 0, 00:18:49.770 "data_size": 0 00:18:49.770 }, 00:18:49.770 { 00:18:49.770 "name": "BaseBdev4", 00:18:49.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.770 "is_configured": false, 00:18:49.770 "data_offset": 0, 00:18:49.770 "data_size": 0 00:18:49.770 } 00:18:49.770 ] 00:18:49.770 }' 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.770 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.029 05:29:21 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:50.029 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.029 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.029 [2024-11-20 05:29:21.651867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:50.030 [2024-11-20 05:29:21.651911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.030 [2024-11-20 05:29:21.659862] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:50.030 [2024-11-20 05:29:21.659902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:50.030 [2024-11-20 05:29:21.659911] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:50.030 [2024-11-20 05:29:21.659921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:50.030 [2024-11-20 05:29:21.659927] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:50.030 [2024-11-20 05:29:21.659937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:50.030 [2024-11-20 05:29:21.659943] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:18:50.030 [2024-11-20 05:29:21.659951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.030 [2024-11-20 05:29:21.694728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.030 BaseBdev1 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.030 [ 00:18:50.030 { 00:18:50.030 "name": "BaseBdev1", 00:18:50.030 "aliases": [ 00:18:50.030 "71875e50-e2e2-4336-8f88-92565e6a4f8b" 00:18:50.030 ], 00:18:50.030 "product_name": "Malloc disk", 00:18:50.030 "block_size": 512, 00:18:50.030 "num_blocks": 65536, 00:18:50.030 "uuid": "71875e50-e2e2-4336-8f88-92565e6a4f8b", 00:18:50.030 "assigned_rate_limits": { 00:18:50.030 "rw_ios_per_sec": 0, 00:18:50.030 "rw_mbytes_per_sec": 0, 00:18:50.030 "r_mbytes_per_sec": 0, 00:18:50.030 "w_mbytes_per_sec": 0 00:18:50.030 }, 00:18:50.030 "claimed": true, 00:18:50.030 "claim_type": "exclusive_write", 00:18:50.030 "zoned": false, 00:18:50.030 "supported_io_types": { 00:18:50.030 "read": true, 00:18:50.030 "write": true, 00:18:50.030 "unmap": true, 00:18:50.030 "flush": true, 00:18:50.030 "reset": true, 00:18:50.030 "nvme_admin": false, 00:18:50.030 "nvme_io": false, 00:18:50.030 "nvme_io_md": false, 00:18:50.030 "write_zeroes": true, 00:18:50.030 "zcopy": true, 00:18:50.030 "get_zone_info": false, 00:18:50.030 "zone_management": false, 00:18:50.030 "zone_append": false, 00:18:50.030 "compare": false, 00:18:50.030 "compare_and_write": false, 00:18:50.030 "abort": true, 00:18:50.030 "seek_hole": false, 00:18:50.030 "seek_data": false, 00:18:50.030 "copy": true, 00:18:50.030 "nvme_iov_md": false 00:18:50.030 }, 00:18:50.030 "memory_domains": [ 00:18:50.030 { 00:18:50.030 "dma_device_id": "system", 00:18:50.030 "dma_device_type": 1 00:18:50.030 }, 00:18:50.030 { 00:18:50.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.030 "dma_device_type": 2 00:18:50.030 } 00:18:50.030 ], 00:18:50.030 "driver_specific": {} 
00:18:50.030 } 00:18:50.030 ] 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.030 "name": "Existed_Raid", 00:18:50.030 "uuid": "58adcb29-cf04-4ddc-846b-c93936b03c52", 00:18:50.030 "strip_size_kb": 64, 00:18:50.030 "state": "configuring", 00:18:50.030 "raid_level": "raid0", 00:18:50.030 "superblock": true, 00:18:50.030 "num_base_bdevs": 4, 00:18:50.030 "num_base_bdevs_discovered": 1, 00:18:50.030 "num_base_bdevs_operational": 4, 00:18:50.030 "base_bdevs_list": [ 00:18:50.030 { 00:18:50.030 "name": "BaseBdev1", 00:18:50.030 "uuid": "71875e50-e2e2-4336-8f88-92565e6a4f8b", 00:18:50.030 "is_configured": true, 00:18:50.030 "data_offset": 2048, 00:18:50.030 "data_size": 63488 00:18:50.030 }, 00:18:50.030 { 00:18:50.030 "name": "BaseBdev2", 00:18:50.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.030 "is_configured": false, 00:18:50.030 "data_offset": 0, 00:18:50.030 "data_size": 0 00:18:50.030 }, 00:18:50.030 { 00:18:50.030 "name": "BaseBdev3", 00:18:50.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.030 "is_configured": false, 00:18:50.030 "data_offset": 0, 00:18:50.030 "data_size": 0 00:18:50.030 }, 00:18:50.030 { 00:18:50.030 "name": "BaseBdev4", 00:18:50.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.030 "is_configured": false, 00:18:50.030 "data_offset": 0, 00:18:50.030 "data_size": 0 00:18:50.030 } 00:18:50.030 ] 00:18:50.030 }' 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.030 05:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.289 [2024-11-20 05:29:22.018863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:50.289 [2024-11-20 05:29:22.018927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.289 [2024-11-20 05:29:22.026913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.289 [2024-11-20 05:29:22.028931] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:50.289 [2024-11-20 05:29:22.028978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:50.289 [2024-11-20 05:29:22.028988] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:50.289 [2024-11-20 05:29:22.029000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:50.289 [2024-11-20 05:29:22.029007] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:50.289 [2024-11-20 05:29:22.029016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:50.289 05:29:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.289 "name": 
"Existed_Raid", 00:18:50.289 "uuid": "d6cf6f0c-072e-42fe-ad0c-4a854c492e55", 00:18:50.289 "strip_size_kb": 64, 00:18:50.289 "state": "configuring", 00:18:50.289 "raid_level": "raid0", 00:18:50.289 "superblock": true, 00:18:50.289 "num_base_bdevs": 4, 00:18:50.289 "num_base_bdevs_discovered": 1, 00:18:50.289 "num_base_bdevs_operational": 4, 00:18:50.289 "base_bdevs_list": [ 00:18:50.289 { 00:18:50.289 "name": "BaseBdev1", 00:18:50.289 "uuid": "71875e50-e2e2-4336-8f88-92565e6a4f8b", 00:18:50.289 "is_configured": true, 00:18:50.289 "data_offset": 2048, 00:18:50.289 "data_size": 63488 00:18:50.289 }, 00:18:50.289 { 00:18:50.289 "name": "BaseBdev2", 00:18:50.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.289 "is_configured": false, 00:18:50.289 "data_offset": 0, 00:18:50.289 "data_size": 0 00:18:50.289 }, 00:18:50.289 { 00:18:50.289 "name": "BaseBdev3", 00:18:50.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.289 "is_configured": false, 00:18:50.289 "data_offset": 0, 00:18:50.289 "data_size": 0 00:18:50.289 }, 00:18:50.289 { 00:18:50.289 "name": "BaseBdev4", 00:18:50.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.289 "is_configured": false, 00:18:50.289 "data_offset": 0, 00:18:50.289 "data_size": 0 00:18:50.289 } 00:18:50.289 ] 00:18:50.289 }' 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.289 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.547 [2024-11-20 05:29:22.363830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:18:50.547 BaseBdev2 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.547 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.805 [ 00:18:50.805 { 00:18:50.805 "name": "BaseBdev2", 00:18:50.805 "aliases": [ 00:18:50.805 "88f6c505-b755-45d1-bac9-d7784c6cc554" 00:18:50.805 ], 00:18:50.806 "product_name": "Malloc disk", 00:18:50.806 "block_size": 512, 00:18:50.806 "num_blocks": 65536, 00:18:50.806 "uuid": "88f6c505-b755-45d1-bac9-d7784c6cc554", 00:18:50.806 
"assigned_rate_limits": { 00:18:50.806 "rw_ios_per_sec": 0, 00:18:50.806 "rw_mbytes_per_sec": 0, 00:18:50.806 "r_mbytes_per_sec": 0, 00:18:50.806 "w_mbytes_per_sec": 0 00:18:50.806 }, 00:18:50.806 "claimed": true, 00:18:50.806 "claim_type": "exclusive_write", 00:18:50.806 "zoned": false, 00:18:50.806 "supported_io_types": { 00:18:50.806 "read": true, 00:18:50.806 "write": true, 00:18:50.806 "unmap": true, 00:18:50.806 "flush": true, 00:18:50.806 "reset": true, 00:18:50.806 "nvme_admin": false, 00:18:50.806 "nvme_io": false, 00:18:50.806 "nvme_io_md": false, 00:18:50.806 "write_zeroes": true, 00:18:50.806 "zcopy": true, 00:18:50.806 "get_zone_info": false, 00:18:50.806 "zone_management": false, 00:18:50.806 "zone_append": false, 00:18:50.806 "compare": false, 00:18:50.806 "compare_and_write": false, 00:18:50.806 "abort": true, 00:18:50.806 "seek_hole": false, 00:18:50.806 "seek_data": false, 00:18:50.806 "copy": true, 00:18:50.806 "nvme_iov_md": false 00:18:50.806 }, 00:18:50.806 "memory_domains": [ 00:18:50.806 { 00:18:50.806 "dma_device_id": "system", 00:18:50.806 "dma_device_type": 1 00:18:50.806 }, 00:18:50.806 { 00:18:50.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.806 "dma_device_type": 2 00:18:50.806 } 00:18:50.806 ], 00:18:50.806 "driver_specific": {} 00:18:50.806 } 00:18:50.806 ] 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.806 "name": "Existed_Raid", 00:18:50.806 "uuid": "d6cf6f0c-072e-42fe-ad0c-4a854c492e55", 00:18:50.806 "strip_size_kb": 64, 00:18:50.806 "state": "configuring", 00:18:50.806 "raid_level": "raid0", 00:18:50.806 "superblock": true, 00:18:50.806 "num_base_bdevs": 4, 00:18:50.806 "num_base_bdevs_discovered": 2, 00:18:50.806 "num_base_bdevs_operational": 4, 
00:18:50.806 "base_bdevs_list": [ 00:18:50.806 { 00:18:50.806 "name": "BaseBdev1", 00:18:50.806 "uuid": "71875e50-e2e2-4336-8f88-92565e6a4f8b", 00:18:50.806 "is_configured": true, 00:18:50.806 "data_offset": 2048, 00:18:50.806 "data_size": 63488 00:18:50.806 }, 00:18:50.806 { 00:18:50.806 "name": "BaseBdev2", 00:18:50.806 "uuid": "88f6c505-b755-45d1-bac9-d7784c6cc554", 00:18:50.806 "is_configured": true, 00:18:50.806 "data_offset": 2048, 00:18:50.806 "data_size": 63488 00:18:50.806 }, 00:18:50.806 { 00:18:50.806 "name": "BaseBdev3", 00:18:50.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.806 "is_configured": false, 00:18:50.806 "data_offset": 0, 00:18:50.806 "data_size": 0 00:18:50.806 }, 00:18:50.806 { 00:18:50.806 "name": "BaseBdev4", 00:18:50.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.806 "is_configured": false, 00:18:50.806 "data_offset": 0, 00:18:50.806 "data_size": 0 00:18:50.806 } 00:18:50.806 ] 00:18:50.806 }' 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.806 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.064 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:51.064 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.064 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.064 [2024-11-20 05:29:22.734593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:51.064 BaseBdev3 00:18:51.064 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.064 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:51.064 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev3 00:18:51.064 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:51.064 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:51.064 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:51.064 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:51.064 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.065 [ 00:18:51.065 { 00:18:51.065 "name": "BaseBdev3", 00:18:51.065 "aliases": [ 00:18:51.065 "b52f2d0e-1a09-49c2-ad23-c238733e28ab" 00:18:51.065 ], 00:18:51.065 "product_name": "Malloc disk", 00:18:51.065 "block_size": 512, 00:18:51.065 "num_blocks": 65536, 00:18:51.065 "uuid": "b52f2d0e-1a09-49c2-ad23-c238733e28ab", 00:18:51.065 "assigned_rate_limits": { 00:18:51.065 "rw_ios_per_sec": 0, 00:18:51.065 "rw_mbytes_per_sec": 0, 00:18:51.065 "r_mbytes_per_sec": 0, 00:18:51.065 "w_mbytes_per_sec": 0 00:18:51.065 }, 00:18:51.065 "claimed": true, 00:18:51.065 "claim_type": "exclusive_write", 00:18:51.065 "zoned": false, 00:18:51.065 "supported_io_types": { 00:18:51.065 "read": true, 00:18:51.065 
"write": true, 00:18:51.065 "unmap": true, 00:18:51.065 "flush": true, 00:18:51.065 "reset": true, 00:18:51.065 "nvme_admin": false, 00:18:51.065 "nvme_io": false, 00:18:51.065 "nvme_io_md": false, 00:18:51.065 "write_zeroes": true, 00:18:51.065 "zcopy": true, 00:18:51.065 "get_zone_info": false, 00:18:51.065 "zone_management": false, 00:18:51.065 "zone_append": false, 00:18:51.065 "compare": false, 00:18:51.065 "compare_and_write": false, 00:18:51.065 "abort": true, 00:18:51.065 "seek_hole": false, 00:18:51.065 "seek_data": false, 00:18:51.065 "copy": true, 00:18:51.065 "nvme_iov_md": false 00:18:51.065 }, 00:18:51.065 "memory_domains": [ 00:18:51.065 { 00:18:51.065 "dma_device_id": "system", 00:18:51.065 "dma_device_type": 1 00:18:51.065 }, 00:18:51.065 { 00:18:51.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.065 "dma_device_type": 2 00:18:51.065 } 00:18:51.065 ], 00:18:51.065 "driver_specific": {} 00:18:51.065 } 00:18:51.065 ] 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.065 "name": "Existed_Raid", 00:18:51.065 "uuid": "d6cf6f0c-072e-42fe-ad0c-4a854c492e55", 00:18:51.065 "strip_size_kb": 64, 00:18:51.065 "state": "configuring", 00:18:51.065 "raid_level": "raid0", 00:18:51.065 "superblock": true, 00:18:51.065 "num_base_bdevs": 4, 00:18:51.065 "num_base_bdevs_discovered": 3, 00:18:51.065 "num_base_bdevs_operational": 4, 00:18:51.065 "base_bdevs_list": [ 00:18:51.065 { 00:18:51.065 "name": "BaseBdev1", 00:18:51.065 "uuid": "71875e50-e2e2-4336-8f88-92565e6a4f8b", 00:18:51.065 "is_configured": true, 00:18:51.065 "data_offset": 2048, 00:18:51.065 "data_size": 63488 00:18:51.065 }, 00:18:51.065 { 00:18:51.065 "name": "BaseBdev2", 00:18:51.065 "uuid": 
"88f6c505-b755-45d1-bac9-d7784c6cc554", 00:18:51.065 "is_configured": true, 00:18:51.065 "data_offset": 2048, 00:18:51.065 "data_size": 63488 00:18:51.065 }, 00:18:51.065 { 00:18:51.065 "name": "BaseBdev3", 00:18:51.065 "uuid": "b52f2d0e-1a09-49c2-ad23-c238733e28ab", 00:18:51.065 "is_configured": true, 00:18:51.065 "data_offset": 2048, 00:18:51.065 "data_size": 63488 00:18:51.065 }, 00:18:51.065 { 00:18:51.065 "name": "BaseBdev4", 00:18:51.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.065 "is_configured": false, 00:18:51.065 "data_offset": 0, 00:18:51.065 "data_size": 0 00:18:51.065 } 00:18:51.065 ] 00:18:51.065 }' 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.065 05:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.324 [2024-11-20 05:29:23.080875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:51.324 [2024-11-20 05:29:23.081153] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:51.324 [2024-11-20 05:29:23.081172] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:51.324 BaseBdev4 00:18:51.324 [2024-11-20 05:29:23.081468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:51.324 [2024-11-20 05:29:23.081629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:51.324 [2024-11-20 05:29:23.081641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:18:51.324 [2024-11-20 05:29:23.081787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.324 [ 00:18:51.324 { 00:18:51.324 "name": "BaseBdev4", 00:18:51.324 "aliases": [ 00:18:51.324 "f4b00cef-83d8-4ce2-9984-db417b5d16ef" 00:18:51.324 ], 00:18:51.324 "product_name": "Malloc disk", 00:18:51.324 "block_size": 512, 00:18:51.324 
"num_blocks": 65536, 00:18:51.324 "uuid": "f4b00cef-83d8-4ce2-9984-db417b5d16ef", 00:18:51.324 "assigned_rate_limits": { 00:18:51.324 "rw_ios_per_sec": 0, 00:18:51.324 "rw_mbytes_per_sec": 0, 00:18:51.324 "r_mbytes_per_sec": 0, 00:18:51.324 "w_mbytes_per_sec": 0 00:18:51.324 }, 00:18:51.324 "claimed": true, 00:18:51.324 "claim_type": "exclusive_write", 00:18:51.324 "zoned": false, 00:18:51.324 "supported_io_types": { 00:18:51.324 "read": true, 00:18:51.324 "write": true, 00:18:51.324 "unmap": true, 00:18:51.324 "flush": true, 00:18:51.324 "reset": true, 00:18:51.324 "nvme_admin": false, 00:18:51.324 "nvme_io": false, 00:18:51.324 "nvme_io_md": false, 00:18:51.324 "write_zeroes": true, 00:18:51.324 "zcopy": true, 00:18:51.324 "get_zone_info": false, 00:18:51.324 "zone_management": false, 00:18:51.324 "zone_append": false, 00:18:51.324 "compare": false, 00:18:51.324 "compare_and_write": false, 00:18:51.324 "abort": true, 00:18:51.324 "seek_hole": false, 00:18:51.324 "seek_data": false, 00:18:51.324 "copy": true, 00:18:51.324 "nvme_iov_md": false 00:18:51.324 }, 00:18:51.324 "memory_domains": [ 00:18:51.324 { 00:18:51.324 "dma_device_id": "system", 00:18:51.324 "dma_device_type": 1 00:18:51.324 }, 00:18:51.324 { 00:18:51.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.324 "dma_device_type": 2 00:18:51.324 } 00:18:51.324 ], 00:18:51.324 "driver_specific": {} 00:18:51.324 } 00:18:51.324 ] 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.324 "name": "Existed_Raid", 00:18:51.324 "uuid": "d6cf6f0c-072e-42fe-ad0c-4a854c492e55", 00:18:51.324 "strip_size_kb": 64, 00:18:51.324 "state": "online", 00:18:51.324 "raid_level": "raid0", 00:18:51.324 "superblock": true, 00:18:51.324 "num_base_bdevs": 4, 
00:18:51.324 "num_base_bdevs_discovered": 4, 00:18:51.324 "num_base_bdevs_operational": 4, 00:18:51.324 "base_bdevs_list": [ 00:18:51.324 { 00:18:51.324 "name": "BaseBdev1", 00:18:51.324 "uuid": "71875e50-e2e2-4336-8f88-92565e6a4f8b", 00:18:51.324 "is_configured": true, 00:18:51.324 "data_offset": 2048, 00:18:51.324 "data_size": 63488 00:18:51.324 }, 00:18:51.324 { 00:18:51.324 "name": "BaseBdev2", 00:18:51.324 "uuid": "88f6c505-b755-45d1-bac9-d7784c6cc554", 00:18:51.324 "is_configured": true, 00:18:51.324 "data_offset": 2048, 00:18:51.324 "data_size": 63488 00:18:51.324 }, 00:18:51.324 { 00:18:51.324 "name": "BaseBdev3", 00:18:51.324 "uuid": "b52f2d0e-1a09-49c2-ad23-c238733e28ab", 00:18:51.324 "is_configured": true, 00:18:51.324 "data_offset": 2048, 00:18:51.324 "data_size": 63488 00:18:51.324 }, 00:18:51.324 { 00:18:51.324 "name": "BaseBdev4", 00:18:51.324 "uuid": "f4b00cef-83d8-4ce2-9984-db417b5d16ef", 00:18:51.324 "is_configured": true, 00:18:51.324 "data_offset": 2048, 00:18:51.324 "data_size": 63488 00:18:51.324 } 00:18:51.324 ] 00:18:51.324 }' 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.324 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.890 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:51.890 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:51.890 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:51.890 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:51.890 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:51.890 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:51.890 
05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:51.890 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:51.890 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.890 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.890 [2024-11-20 05:29:23.425426] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:51.890 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.890 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:51.890 "name": "Existed_Raid", 00:18:51.890 "aliases": [ 00:18:51.890 "d6cf6f0c-072e-42fe-ad0c-4a854c492e55" 00:18:51.890 ], 00:18:51.890 "product_name": "Raid Volume", 00:18:51.890 "block_size": 512, 00:18:51.890 "num_blocks": 253952, 00:18:51.890 "uuid": "d6cf6f0c-072e-42fe-ad0c-4a854c492e55", 00:18:51.890 "assigned_rate_limits": { 00:18:51.890 "rw_ios_per_sec": 0, 00:18:51.890 "rw_mbytes_per_sec": 0, 00:18:51.890 "r_mbytes_per_sec": 0, 00:18:51.890 "w_mbytes_per_sec": 0 00:18:51.890 }, 00:18:51.890 "claimed": false, 00:18:51.890 "zoned": false, 00:18:51.890 "supported_io_types": { 00:18:51.890 "read": true, 00:18:51.890 "write": true, 00:18:51.890 "unmap": true, 00:18:51.890 "flush": true, 00:18:51.890 "reset": true, 00:18:51.890 "nvme_admin": false, 00:18:51.890 "nvme_io": false, 00:18:51.890 "nvme_io_md": false, 00:18:51.890 "write_zeroes": true, 00:18:51.890 "zcopy": false, 00:18:51.890 "get_zone_info": false, 00:18:51.890 "zone_management": false, 00:18:51.890 "zone_append": false, 00:18:51.890 "compare": false, 00:18:51.890 "compare_and_write": false, 00:18:51.890 "abort": false, 00:18:51.890 "seek_hole": false, 00:18:51.890 "seek_data": false, 00:18:51.891 "copy": false, 00:18:51.891 
"nvme_iov_md": false 00:18:51.891 }, 00:18:51.891 "memory_domains": [ 00:18:51.891 { 00:18:51.891 "dma_device_id": "system", 00:18:51.891 "dma_device_type": 1 00:18:51.891 }, 00:18:51.891 { 00:18:51.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.891 "dma_device_type": 2 00:18:51.891 }, 00:18:51.891 { 00:18:51.891 "dma_device_id": "system", 00:18:51.891 "dma_device_type": 1 00:18:51.891 }, 00:18:51.891 { 00:18:51.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.891 "dma_device_type": 2 00:18:51.891 }, 00:18:51.891 { 00:18:51.891 "dma_device_id": "system", 00:18:51.891 "dma_device_type": 1 00:18:51.891 }, 00:18:51.891 { 00:18:51.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.891 "dma_device_type": 2 00:18:51.891 }, 00:18:51.891 { 00:18:51.891 "dma_device_id": "system", 00:18:51.891 "dma_device_type": 1 00:18:51.891 }, 00:18:51.891 { 00:18:51.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.891 "dma_device_type": 2 00:18:51.891 } 00:18:51.891 ], 00:18:51.891 "driver_specific": { 00:18:51.891 "raid": { 00:18:51.891 "uuid": "d6cf6f0c-072e-42fe-ad0c-4a854c492e55", 00:18:51.891 "strip_size_kb": 64, 00:18:51.891 "state": "online", 00:18:51.891 "raid_level": "raid0", 00:18:51.891 "superblock": true, 00:18:51.891 "num_base_bdevs": 4, 00:18:51.891 "num_base_bdevs_discovered": 4, 00:18:51.891 "num_base_bdevs_operational": 4, 00:18:51.891 "base_bdevs_list": [ 00:18:51.891 { 00:18:51.891 "name": "BaseBdev1", 00:18:51.891 "uuid": "71875e50-e2e2-4336-8f88-92565e6a4f8b", 00:18:51.891 "is_configured": true, 00:18:51.891 "data_offset": 2048, 00:18:51.891 "data_size": 63488 00:18:51.891 }, 00:18:51.891 { 00:18:51.891 "name": "BaseBdev2", 00:18:51.891 "uuid": "88f6c505-b755-45d1-bac9-d7784c6cc554", 00:18:51.891 "is_configured": true, 00:18:51.891 "data_offset": 2048, 00:18:51.891 "data_size": 63488 00:18:51.891 }, 00:18:51.891 { 00:18:51.891 "name": "BaseBdev3", 00:18:51.891 "uuid": "b52f2d0e-1a09-49c2-ad23-c238733e28ab", 00:18:51.891 "is_configured": true, 
00:18:51.891 "data_offset": 2048, 00:18:51.891 "data_size": 63488 00:18:51.891 }, 00:18:51.891 { 00:18:51.891 "name": "BaseBdev4", 00:18:51.891 "uuid": "f4b00cef-83d8-4ce2-9984-db417b5d16ef", 00:18:51.891 "is_configured": true, 00:18:51.891 "data_offset": 2048, 00:18:51.891 "data_size": 63488 00:18:51.891 } 00:18:51.891 ] 00:18:51.891 } 00:18:51.891 } 00:18:51.891 }' 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:51.891 BaseBdev2 00:18:51.891 BaseBdev3 00:18:51.891 BaseBdev4' 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:51.891 05:29:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.891 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.891 [2024-11-20 05:29:23.649131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:51.891 [2024-11-20 05:29:23.649167] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:51.892 [2024-11-20 05:29:23.649225] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.892 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.152 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:52.152 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.152 "name": "Existed_Raid", 00:18:52.152 "uuid": "d6cf6f0c-072e-42fe-ad0c-4a854c492e55", 00:18:52.152 "strip_size_kb": 64, 00:18:52.152 "state": "offline", 00:18:52.152 "raid_level": "raid0", 00:18:52.152 "superblock": true, 00:18:52.152 "num_base_bdevs": 4, 00:18:52.152 "num_base_bdevs_discovered": 3, 00:18:52.152 "num_base_bdevs_operational": 3, 00:18:52.152 "base_bdevs_list": [ 00:18:52.152 { 00:18:52.152 "name": null, 00:18:52.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.152 "is_configured": false, 00:18:52.152 "data_offset": 0, 00:18:52.152 "data_size": 63488 00:18:52.152 }, 00:18:52.152 { 00:18:52.152 "name": "BaseBdev2", 00:18:52.152 "uuid": "88f6c505-b755-45d1-bac9-d7784c6cc554", 00:18:52.152 "is_configured": true, 00:18:52.152 "data_offset": 2048, 00:18:52.152 "data_size": 63488 00:18:52.152 }, 00:18:52.152 { 00:18:52.152 "name": "BaseBdev3", 00:18:52.152 "uuid": "b52f2d0e-1a09-49c2-ad23-c238733e28ab", 00:18:52.152 "is_configured": true, 00:18:52.152 "data_offset": 2048, 00:18:52.152 "data_size": 63488 00:18:52.152 }, 00:18:52.152 { 00:18:52.152 "name": "BaseBdev4", 00:18:52.152 "uuid": "f4b00cef-83d8-4ce2-9984-db417b5d16ef", 00:18:52.152 "is_configured": true, 00:18:52.152 "data_offset": 2048, 00:18:52.152 "data_size": 63488 00:18:52.152 } 00:18:52.152 ] 00:18:52.152 }' 00:18:52.152 05:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.152 05:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.411 
05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.411 [2024-11-20 05:29:24.064492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:52.411 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:52.412 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.412 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.412 [2024-11-20 05:29:24.166730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:52.412 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.412 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:52.412 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:52.412 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.412 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.412 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.412 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:52.672 05:29:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.672 [2024-11-20 05:29:24.266563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:52.672 [2024-11-20 05:29:24.266620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.672 BaseBdev2 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.672 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.673 [ 00:18:52.673 { 00:18:52.673 "name": "BaseBdev2", 00:18:52.673 "aliases": [ 00:18:52.673 
"609331e8-7b53-41a9-adfd-bd842e9269ad" 00:18:52.673 ], 00:18:52.673 "product_name": "Malloc disk", 00:18:52.673 "block_size": 512, 00:18:52.673 "num_blocks": 65536, 00:18:52.673 "uuid": "609331e8-7b53-41a9-adfd-bd842e9269ad", 00:18:52.673 "assigned_rate_limits": { 00:18:52.673 "rw_ios_per_sec": 0, 00:18:52.673 "rw_mbytes_per_sec": 0, 00:18:52.673 "r_mbytes_per_sec": 0, 00:18:52.673 "w_mbytes_per_sec": 0 00:18:52.673 }, 00:18:52.673 "claimed": false, 00:18:52.673 "zoned": false, 00:18:52.673 "supported_io_types": { 00:18:52.673 "read": true, 00:18:52.673 "write": true, 00:18:52.673 "unmap": true, 00:18:52.673 "flush": true, 00:18:52.673 "reset": true, 00:18:52.673 "nvme_admin": false, 00:18:52.673 "nvme_io": false, 00:18:52.673 "nvme_io_md": false, 00:18:52.673 "write_zeroes": true, 00:18:52.673 "zcopy": true, 00:18:52.673 "get_zone_info": false, 00:18:52.673 "zone_management": false, 00:18:52.673 "zone_append": false, 00:18:52.673 "compare": false, 00:18:52.673 "compare_and_write": false, 00:18:52.673 "abort": true, 00:18:52.673 "seek_hole": false, 00:18:52.673 "seek_data": false, 00:18:52.673 "copy": true, 00:18:52.673 "nvme_iov_md": false 00:18:52.673 }, 00:18:52.673 "memory_domains": [ 00:18:52.673 { 00:18:52.673 "dma_device_id": "system", 00:18:52.673 "dma_device_type": 1 00:18:52.673 }, 00:18:52.673 { 00:18:52.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.673 "dma_device_type": 2 00:18:52.673 } 00:18:52.673 ], 00:18:52.673 "driver_specific": {} 00:18:52.673 } 00:18:52.673 ] 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:52.673 05:29:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.673 BaseBdev3 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.673 [ 00:18:52.673 { 
00:18:52.673 "name": "BaseBdev3", 00:18:52.673 "aliases": [ 00:18:52.673 "b23e5f5c-09a3-4467-b84e-c7470cd53233" 00:18:52.673 ], 00:18:52.673 "product_name": "Malloc disk", 00:18:52.673 "block_size": 512, 00:18:52.673 "num_blocks": 65536, 00:18:52.673 "uuid": "b23e5f5c-09a3-4467-b84e-c7470cd53233", 00:18:52.673 "assigned_rate_limits": { 00:18:52.673 "rw_ios_per_sec": 0, 00:18:52.673 "rw_mbytes_per_sec": 0, 00:18:52.673 "r_mbytes_per_sec": 0, 00:18:52.673 "w_mbytes_per_sec": 0 00:18:52.673 }, 00:18:52.673 "claimed": false, 00:18:52.673 "zoned": false, 00:18:52.673 "supported_io_types": { 00:18:52.673 "read": true, 00:18:52.673 "write": true, 00:18:52.673 "unmap": true, 00:18:52.673 "flush": true, 00:18:52.673 "reset": true, 00:18:52.673 "nvme_admin": false, 00:18:52.673 "nvme_io": false, 00:18:52.673 "nvme_io_md": false, 00:18:52.673 "write_zeroes": true, 00:18:52.673 "zcopy": true, 00:18:52.673 "get_zone_info": false, 00:18:52.673 "zone_management": false, 00:18:52.673 "zone_append": false, 00:18:52.673 "compare": false, 00:18:52.673 "compare_and_write": false, 00:18:52.673 "abort": true, 00:18:52.673 "seek_hole": false, 00:18:52.673 "seek_data": false, 00:18:52.673 "copy": true, 00:18:52.673 "nvme_iov_md": false 00:18:52.673 }, 00:18:52.673 "memory_domains": [ 00:18:52.673 { 00:18:52.673 "dma_device_id": "system", 00:18:52.673 "dma_device_type": 1 00:18:52.673 }, 00:18:52.673 { 00:18:52.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.673 "dma_device_type": 2 00:18:52.673 } 00:18:52.673 ], 00:18:52.673 "driver_specific": {} 00:18:52.673 } 00:18:52.673 ] 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.673 BaseBdev4 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:52.673 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:18:52.933 [ 00:18:52.933 { 00:18:52.933 "name": "BaseBdev4", 00:18:52.933 "aliases": [ 00:18:52.933 "bed54941-5073-48b2-83af-1a5108fb6a90" 00:18:52.933 ], 00:18:52.933 "product_name": "Malloc disk", 00:18:52.933 "block_size": 512, 00:18:52.933 "num_blocks": 65536, 00:18:52.933 "uuid": "bed54941-5073-48b2-83af-1a5108fb6a90", 00:18:52.933 "assigned_rate_limits": { 00:18:52.933 "rw_ios_per_sec": 0, 00:18:52.933 "rw_mbytes_per_sec": 0, 00:18:52.933 "r_mbytes_per_sec": 0, 00:18:52.933 "w_mbytes_per_sec": 0 00:18:52.933 }, 00:18:52.933 "claimed": false, 00:18:52.933 "zoned": false, 00:18:52.933 "supported_io_types": { 00:18:52.933 "read": true, 00:18:52.933 "write": true, 00:18:52.933 "unmap": true, 00:18:52.933 "flush": true, 00:18:52.933 "reset": true, 00:18:52.933 "nvme_admin": false, 00:18:52.933 "nvme_io": false, 00:18:52.933 "nvme_io_md": false, 00:18:52.933 "write_zeroes": true, 00:18:52.933 "zcopy": true, 00:18:52.933 "get_zone_info": false, 00:18:52.933 "zone_management": false, 00:18:52.933 "zone_append": false, 00:18:52.933 "compare": false, 00:18:52.933 "compare_and_write": false, 00:18:52.933 "abort": true, 00:18:52.933 "seek_hole": false, 00:18:52.933 "seek_data": false, 00:18:52.933 "copy": true, 00:18:52.933 "nvme_iov_md": false 00:18:52.933 }, 00:18:52.933 "memory_domains": [ 00:18:52.933 { 00:18:52.933 "dma_device_id": "system", 00:18:52.933 "dma_device_type": 1 00:18:52.933 }, 00:18:52.933 { 00:18:52.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.933 "dma_device_type": 2 00:18:52.933 } 00:18:52.933 ], 00:18:52.933 "driver_specific": {} 00:18:52.933 } 00:18:52.933 ] 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:52.933 05:29:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.933 [2024-11-20 05:29:24.526884] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:52.933 [2024-11-20 05:29:24.526935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:52.933 [2024-11-20 05:29:24.526959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:52.933 [2024-11-20 05:29:24.528949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:52.933 [2024-11-20 05:29:24.529006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:18:52.933 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.934 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.934 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.934 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.934 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.934 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.934 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.934 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.934 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.934 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.934 "name": "Existed_Raid", 00:18:52.934 "uuid": "261c32be-f61f-451c-b3e0-bf94e80da010", 00:18:52.934 "strip_size_kb": 64, 00:18:52.934 "state": "configuring", 00:18:52.934 "raid_level": "raid0", 00:18:52.934 "superblock": true, 00:18:52.934 "num_base_bdevs": 4, 00:18:52.934 "num_base_bdevs_discovered": 3, 00:18:52.934 "num_base_bdevs_operational": 4, 00:18:52.934 "base_bdevs_list": [ 00:18:52.934 { 00:18:52.934 "name": "BaseBdev1", 00:18:52.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.934 "is_configured": false, 00:18:52.934 "data_offset": 0, 00:18:52.934 "data_size": 0 00:18:52.934 }, 00:18:52.934 { 00:18:52.934 "name": "BaseBdev2", 00:18:52.934 "uuid": "609331e8-7b53-41a9-adfd-bd842e9269ad", 00:18:52.934 "is_configured": true, 00:18:52.934 "data_offset": 2048, 00:18:52.934 "data_size": 63488 
00:18:52.934 }, 00:18:52.934 { 00:18:52.934 "name": "BaseBdev3", 00:18:52.934 "uuid": "b23e5f5c-09a3-4467-b84e-c7470cd53233", 00:18:52.934 "is_configured": true, 00:18:52.934 "data_offset": 2048, 00:18:52.934 "data_size": 63488 00:18:52.934 }, 00:18:52.934 { 00:18:52.934 "name": "BaseBdev4", 00:18:52.934 "uuid": "bed54941-5073-48b2-83af-1a5108fb6a90", 00:18:52.934 "is_configured": true, 00:18:52.934 "data_offset": 2048, 00:18:52.934 "data_size": 63488 00:18:52.934 } 00:18:52.934 ] 00:18:52.934 }' 00:18:52.934 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.934 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.195 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:53.195 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.195 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.195 [2024-11-20 05:29:24.890982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:53.195 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.195 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:53.195 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:53.195 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:53.195 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:53.195 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.195 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:18:53.195 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.196 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.196 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.196 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.196 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.196 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.196 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.196 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.196 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.196 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.196 "name": "Existed_Raid", 00:18:53.196 "uuid": "261c32be-f61f-451c-b3e0-bf94e80da010", 00:18:53.196 "strip_size_kb": 64, 00:18:53.196 "state": "configuring", 00:18:53.196 "raid_level": "raid0", 00:18:53.196 "superblock": true, 00:18:53.196 "num_base_bdevs": 4, 00:18:53.196 "num_base_bdevs_discovered": 2, 00:18:53.196 "num_base_bdevs_operational": 4, 00:18:53.196 "base_bdevs_list": [ 00:18:53.196 { 00:18:53.196 "name": "BaseBdev1", 00:18:53.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.196 "is_configured": false, 00:18:53.196 "data_offset": 0, 00:18:53.196 "data_size": 0 00:18:53.196 }, 00:18:53.196 { 00:18:53.196 "name": null, 00:18:53.196 "uuid": "609331e8-7b53-41a9-adfd-bd842e9269ad", 00:18:53.196 "is_configured": false, 00:18:53.196 "data_offset": 0, 00:18:53.196 "data_size": 63488 
00:18:53.196 }, 00:18:53.196 { 00:18:53.196 "name": "BaseBdev3", 00:18:53.196 "uuid": "b23e5f5c-09a3-4467-b84e-c7470cd53233", 00:18:53.196 "is_configured": true, 00:18:53.196 "data_offset": 2048, 00:18:53.196 "data_size": 63488 00:18:53.196 }, 00:18:53.196 { 00:18:53.196 "name": "BaseBdev4", 00:18:53.196 "uuid": "bed54941-5073-48b2-83af-1a5108fb6a90", 00:18:53.196 "is_configured": true, 00:18:53.196 "data_offset": 2048, 00:18:53.196 "data_size": 63488 00:18:53.196 } 00:18:53.196 ] 00:18:53.196 }' 00:18:53.196 05:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.196 05:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.461 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.461 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.461 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.461 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:53.461 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.461 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:53.461 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:53.461 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.461 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.723 [2024-11-20 05:29:25.303807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:53.723 BaseBdev1 00:18:53.723 05:29:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.723 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:53.723 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:53.723 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:53.723 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:53.723 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:53.723 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:53.723 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:53.723 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.723 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.723 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.723 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:53.723 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.723 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.723 [ 00:18:53.723 { 00:18:53.723 "name": "BaseBdev1", 00:18:53.723 "aliases": [ 00:18:53.723 "35412b6a-dd50-4f4b-98e2-ec7ef78aea29" 00:18:53.723 ], 00:18:53.723 "product_name": "Malloc disk", 00:18:53.723 "block_size": 512, 00:18:53.723 "num_blocks": 65536, 00:18:53.724 "uuid": "35412b6a-dd50-4f4b-98e2-ec7ef78aea29", 00:18:53.724 "assigned_rate_limits": { 00:18:53.724 "rw_ios_per_sec": 0, 00:18:53.724 "rw_mbytes_per_sec": 0, 
00:18:53.724 "r_mbytes_per_sec": 0, 00:18:53.724 "w_mbytes_per_sec": 0 00:18:53.724 }, 00:18:53.724 "claimed": true, 00:18:53.724 "claim_type": "exclusive_write", 00:18:53.724 "zoned": false, 00:18:53.724 "supported_io_types": { 00:18:53.724 "read": true, 00:18:53.724 "write": true, 00:18:53.724 "unmap": true, 00:18:53.724 "flush": true, 00:18:53.724 "reset": true, 00:18:53.724 "nvme_admin": false, 00:18:53.724 "nvme_io": false, 00:18:53.724 "nvme_io_md": false, 00:18:53.724 "write_zeroes": true, 00:18:53.724 "zcopy": true, 00:18:53.724 "get_zone_info": false, 00:18:53.724 "zone_management": false, 00:18:53.724 "zone_append": false, 00:18:53.724 "compare": false, 00:18:53.724 "compare_and_write": false, 00:18:53.724 "abort": true, 00:18:53.724 "seek_hole": false, 00:18:53.724 "seek_data": false, 00:18:53.724 "copy": true, 00:18:53.724 "nvme_iov_md": false 00:18:53.724 }, 00:18:53.724 "memory_domains": [ 00:18:53.724 { 00:18:53.724 "dma_device_id": "system", 00:18:53.724 "dma_device_type": 1 00:18:53.724 }, 00:18:53.724 { 00:18:53.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.724 "dma_device_type": 2 00:18:53.724 } 00:18:53.724 ], 00:18:53.724 "driver_specific": {} 00:18:53.724 } 00:18:53.724 ] 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:53.724 05:29:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.724 "name": "Existed_Raid", 00:18:53.724 "uuid": "261c32be-f61f-451c-b3e0-bf94e80da010", 00:18:53.724 "strip_size_kb": 64, 00:18:53.724 "state": "configuring", 00:18:53.724 "raid_level": "raid0", 00:18:53.724 "superblock": true, 00:18:53.724 "num_base_bdevs": 4, 00:18:53.724 "num_base_bdevs_discovered": 3, 00:18:53.724 "num_base_bdevs_operational": 4, 00:18:53.724 "base_bdevs_list": [ 00:18:53.724 { 00:18:53.724 "name": "BaseBdev1", 00:18:53.724 "uuid": "35412b6a-dd50-4f4b-98e2-ec7ef78aea29", 00:18:53.724 "is_configured": true, 00:18:53.724 "data_offset": 2048, 00:18:53.724 "data_size": 63488 00:18:53.724 }, 00:18:53.724 { 
00:18:53.724 "name": null, 00:18:53.724 "uuid": "609331e8-7b53-41a9-adfd-bd842e9269ad", 00:18:53.724 "is_configured": false, 00:18:53.724 "data_offset": 0, 00:18:53.724 "data_size": 63488 00:18:53.724 }, 00:18:53.724 { 00:18:53.724 "name": "BaseBdev3", 00:18:53.724 "uuid": "b23e5f5c-09a3-4467-b84e-c7470cd53233", 00:18:53.724 "is_configured": true, 00:18:53.724 "data_offset": 2048, 00:18:53.724 "data_size": 63488 00:18:53.724 }, 00:18:53.724 { 00:18:53.724 "name": "BaseBdev4", 00:18:53.724 "uuid": "bed54941-5073-48b2-83af-1a5108fb6a90", 00:18:53.724 "is_configured": true, 00:18:53.724 "data_offset": 2048, 00:18:53.724 "data_size": 63488 00:18:53.724 } 00:18:53.724 ] 00:18:53.724 }' 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.724 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.983 [2024-11-20 05:29:25.687956] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.983 05:29:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.983 "name": "Existed_Raid", 00:18:53.983 "uuid": "261c32be-f61f-451c-b3e0-bf94e80da010", 00:18:53.983 "strip_size_kb": 64, 00:18:53.983 "state": "configuring", 00:18:53.983 "raid_level": "raid0", 00:18:53.983 "superblock": true, 00:18:53.983 "num_base_bdevs": 4, 00:18:53.983 "num_base_bdevs_discovered": 2, 00:18:53.983 "num_base_bdevs_operational": 4, 00:18:53.983 "base_bdevs_list": [ 00:18:53.983 { 00:18:53.983 "name": "BaseBdev1", 00:18:53.983 "uuid": "35412b6a-dd50-4f4b-98e2-ec7ef78aea29", 00:18:53.983 "is_configured": true, 00:18:53.983 "data_offset": 2048, 00:18:53.983 "data_size": 63488 00:18:53.983 }, 00:18:53.983 { 00:18:53.983 "name": null, 00:18:53.983 "uuid": "609331e8-7b53-41a9-adfd-bd842e9269ad", 00:18:53.983 "is_configured": false, 00:18:53.983 "data_offset": 0, 00:18:53.983 "data_size": 63488 00:18:53.983 }, 00:18:53.983 { 00:18:53.983 "name": null, 00:18:53.983 "uuid": "b23e5f5c-09a3-4467-b84e-c7470cd53233", 00:18:53.983 "is_configured": false, 00:18:53.983 "data_offset": 0, 00:18:53.983 "data_size": 63488 00:18:53.983 }, 00:18:53.983 { 00:18:53.983 "name": "BaseBdev4", 00:18:53.983 "uuid": "bed54941-5073-48b2-83af-1a5108fb6a90", 00:18:53.983 "is_configured": true, 00:18:53.983 "data_offset": 2048, 00:18:53.983 "data_size": 63488 00:18:53.983 } 00:18:53.983 ] 00:18:53.983 }' 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.983 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.243 05:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.243 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.243 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.243 05:29:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:54.243 05:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.243 [2024-11-20 05:29:26.020055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.243 "name": "Existed_Raid", 00:18:54.243 "uuid": "261c32be-f61f-451c-b3e0-bf94e80da010", 00:18:54.243 "strip_size_kb": 64, 00:18:54.243 "state": "configuring", 00:18:54.243 "raid_level": "raid0", 00:18:54.243 "superblock": true, 00:18:54.243 "num_base_bdevs": 4, 00:18:54.243 "num_base_bdevs_discovered": 3, 00:18:54.243 "num_base_bdevs_operational": 4, 00:18:54.243 "base_bdevs_list": [ 00:18:54.243 { 00:18:54.243 "name": "BaseBdev1", 00:18:54.243 "uuid": "35412b6a-dd50-4f4b-98e2-ec7ef78aea29", 00:18:54.243 "is_configured": true, 00:18:54.243 "data_offset": 2048, 00:18:54.243 "data_size": 63488 00:18:54.243 }, 00:18:54.243 { 00:18:54.243 "name": null, 00:18:54.243 "uuid": "609331e8-7b53-41a9-adfd-bd842e9269ad", 00:18:54.243 "is_configured": false, 00:18:54.243 "data_offset": 0, 00:18:54.243 "data_size": 63488 00:18:54.243 }, 00:18:54.243 { 00:18:54.243 "name": "BaseBdev3", 00:18:54.243 "uuid": "b23e5f5c-09a3-4467-b84e-c7470cd53233", 00:18:54.243 "is_configured": true, 00:18:54.243 "data_offset": 2048, 00:18:54.243 "data_size": 63488 00:18:54.243 }, 00:18:54.243 { 00:18:54.243 "name": "BaseBdev4", 00:18:54.243 "uuid": 
"bed54941-5073-48b2-83af-1a5108fb6a90", 00:18:54.243 "is_configured": true, 00:18:54.243 "data_offset": 2048, 00:18:54.243 "data_size": 63488 00:18:54.243 } 00:18:54.243 ] 00:18:54.243 }' 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.243 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.528 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:54.528 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.528 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.528 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.789 [2024-11-20 05:29:26.388177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.789 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.790 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.790 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.790 "name": "Existed_Raid", 00:18:54.790 "uuid": "261c32be-f61f-451c-b3e0-bf94e80da010", 00:18:54.790 "strip_size_kb": 64, 00:18:54.790 "state": "configuring", 00:18:54.790 "raid_level": "raid0", 00:18:54.790 "superblock": true, 00:18:54.790 "num_base_bdevs": 4, 00:18:54.790 "num_base_bdevs_discovered": 2, 00:18:54.790 "num_base_bdevs_operational": 4, 00:18:54.790 "base_bdevs_list": [ 00:18:54.790 { 00:18:54.790 "name": null, 00:18:54.790 
"uuid": "35412b6a-dd50-4f4b-98e2-ec7ef78aea29", 00:18:54.790 "is_configured": false, 00:18:54.790 "data_offset": 0, 00:18:54.790 "data_size": 63488 00:18:54.790 }, 00:18:54.790 { 00:18:54.790 "name": null, 00:18:54.790 "uuid": "609331e8-7b53-41a9-adfd-bd842e9269ad", 00:18:54.790 "is_configured": false, 00:18:54.790 "data_offset": 0, 00:18:54.790 "data_size": 63488 00:18:54.790 }, 00:18:54.790 { 00:18:54.790 "name": "BaseBdev3", 00:18:54.790 "uuid": "b23e5f5c-09a3-4467-b84e-c7470cd53233", 00:18:54.790 "is_configured": true, 00:18:54.790 "data_offset": 2048, 00:18:54.790 "data_size": 63488 00:18:54.790 }, 00:18:54.790 { 00:18:54.790 "name": "BaseBdev4", 00:18:54.790 "uuid": "bed54941-5073-48b2-83af-1a5108fb6a90", 00:18:54.790 "is_configured": true, 00:18:54.790 "data_offset": 2048, 00:18:54.790 "data_size": 63488 00:18:54.790 } 00:18:54.790 ] 00:18:54.790 }' 00:18:54.790 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.790 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.048 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:55.048 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.048 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.049 [2024-11-20 05:29:26.798439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.049 05:29:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.049 "name": "Existed_Raid", 00:18:55.049 "uuid": "261c32be-f61f-451c-b3e0-bf94e80da010", 00:18:55.049 "strip_size_kb": 64, 00:18:55.049 "state": "configuring", 00:18:55.049 "raid_level": "raid0", 00:18:55.049 "superblock": true, 00:18:55.049 "num_base_bdevs": 4, 00:18:55.049 "num_base_bdevs_discovered": 3, 00:18:55.049 "num_base_bdevs_operational": 4, 00:18:55.049 "base_bdevs_list": [ 00:18:55.049 { 00:18:55.049 "name": null, 00:18:55.049 "uuid": "35412b6a-dd50-4f4b-98e2-ec7ef78aea29", 00:18:55.049 "is_configured": false, 00:18:55.049 "data_offset": 0, 00:18:55.049 "data_size": 63488 00:18:55.049 }, 00:18:55.049 { 00:18:55.049 "name": "BaseBdev2", 00:18:55.049 "uuid": "609331e8-7b53-41a9-adfd-bd842e9269ad", 00:18:55.049 "is_configured": true, 00:18:55.049 "data_offset": 2048, 00:18:55.049 "data_size": 63488 00:18:55.049 }, 00:18:55.049 { 00:18:55.049 "name": "BaseBdev3", 00:18:55.049 "uuid": "b23e5f5c-09a3-4467-b84e-c7470cd53233", 00:18:55.049 "is_configured": true, 00:18:55.049 "data_offset": 2048, 00:18:55.049 "data_size": 63488 00:18:55.049 }, 00:18:55.049 { 00:18:55.049 "name": "BaseBdev4", 00:18:55.049 "uuid": "bed54941-5073-48b2-83af-1a5108fb6a90", 00:18:55.049 "is_configured": true, 00:18:55.049 "data_offset": 2048, 00:18:55.049 "data_size": 63488 00:18:55.049 } 00:18:55.049 ] 00:18:55.049 }' 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.049 05:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.309 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:55.309 05:29:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.309 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.309 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.309 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.568 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:55.568 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.568 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:55.568 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.568 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.568 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.568 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 35412b6a-dd50-4f4b-98e2-ec7ef78aea29 00:18:55.568 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.568 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.568 [2024-11-20 05:29:27.207232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:55.568 [2024-11-20 05:29:27.207488] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:55.568 [2024-11-20 05:29:27.207502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:55.568 [2024-11-20 05:29:27.207789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:18:55.568 [2024-11-20 05:29:27.207929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:55.568 [2024-11-20 05:29:27.207941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:55.568 NewBaseBdev 00:18:55.569 [2024-11-20 05:29:27.208061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.569 05:29:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.569 [ 00:18:55.569 { 00:18:55.569 "name": "NewBaseBdev", 00:18:55.569 "aliases": [ 00:18:55.569 "35412b6a-dd50-4f4b-98e2-ec7ef78aea29" 00:18:55.569 ], 00:18:55.569 "product_name": "Malloc disk", 00:18:55.569 "block_size": 512, 00:18:55.569 "num_blocks": 65536, 00:18:55.569 "uuid": "35412b6a-dd50-4f4b-98e2-ec7ef78aea29", 00:18:55.569 "assigned_rate_limits": { 00:18:55.569 "rw_ios_per_sec": 0, 00:18:55.569 "rw_mbytes_per_sec": 0, 00:18:55.569 "r_mbytes_per_sec": 0, 00:18:55.569 "w_mbytes_per_sec": 0 00:18:55.569 }, 00:18:55.569 "claimed": true, 00:18:55.569 "claim_type": "exclusive_write", 00:18:55.569 "zoned": false, 00:18:55.569 "supported_io_types": { 00:18:55.569 "read": true, 00:18:55.569 "write": true, 00:18:55.569 "unmap": true, 00:18:55.569 "flush": true, 00:18:55.569 "reset": true, 00:18:55.569 "nvme_admin": false, 00:18:55.569 "nvme_io": false, 00:18:55.569 "nvme_io_md": false, 00:18:55.569 "write_zeroes": true, 00:18:55.569 "zcopy": true, 00:18:55.569 "get_zone_info": false, 00:18:55.569 "zone_management": false, 00:18:55.569 "zone_append": false, 00:18:55.569 "compare": false, 00:18:55.569 "compare_and_write": false, 00:18:55.569 "abort": true, 00:18:55.569 "seek_hole": false, 00:18:55.569 "seek_data": false, 00:18:55.569 "copy": true, 00:18:55.569 "nvme_iov_md": false 00:18:55.569 }, 00:18:55.569 "memory_domains": [ 00:18:55.569 { 00:18:55.569 "dma_device_id": "system", 00:18:55.569 "dma_device_type": 1 00:18:55.569 }, 00:18:55.569 { 00:18:55.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.569 "dma_device_type": 2 00:18:55.569 } 00:18:55.569 ], 00:18:55.569 "driver_specific": {} 00:18:55.569 } 00:18:55.569 ] 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:55.569 05:29:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.569 "name": "Existed_Raid", 00:18:55.569 "uuid": "261c32be-f61f-451c-b3e0-bf94e80da010", 00:18:55.569 "strip_size_kb": 64, 00:18:55.569 
"state": "online", 00:18:55.569 "raid_level": "raid0", 00:18:55.569 "superblock": true, 00:18:55.569 "num_base_bdevs": 4, 00:18:55.569 "num_base_bdevs_discovered": 4, 00:18:55.569 "num_base_bdevs_operational": 4, 00:18:55.569 "base_bdevs_list": [ 00:18:55.569 { 00:18:55.569 "name": "NewBaseBdev", 00:18:55.569 "uuid": "35412b6a-dd50-4f4b-98e2-ec7ef78aea29", 00:18:55.569 "is_configured": true, 00:18:55.569 "data_offset": 2048, 00:18:55.569 "data_size": 63488 00:18:55.569 }, 00:18:55.569 { 00:18:55.569 "name": "BaseBdev2", 00:18:55.569 "uuid": "609331e8-7b53-41a9-adfd-bd842e9269ad", 00:18:55.569 "is_configured": true, 00:18:55.569 "data_offset": 2048, 00:18:55.569 "data_size": 63488 00:18:55.569 }, 00:18:55.569 { 00:18:55.569 "name": "BaseBdev3", 00:18:55.569 "uuid": "b23e5f5c-09a3-4467-b84e-c7470cd53233", 00:18:55.569 "is_configured": true, 00:18:55.569 "data_offset": 2048, 00:18:55.569 "data_size": 63488 00:18:55.569 }, 00:18:55.569 { 00:18:55.569 "name": "BaseBdev4", 00:18:55.569 "uuid": "bed54941-5073-48b2-83af-1a5108fb6a90", 00:18:55.569 "is_configured": true, 00:18:55.569 "data_offset": 2048, 00:18:55.569 "data_size": 63488 00:18:55.569 } 00:18:55.569 ] 00:18:55.569 }' 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.569 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:55.830 
05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.830 [2024-11-20 05:29:27.528007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:55.830 "name": "Existed_Raid", 00:18:55.830 "aliases": [ 00:18:55.830 "261c32be-f61f-451c-b3e0-bf94e80da010" 00:18:55.830 ], 00:18:55.830 "product_name": "Raid Volume", 00:18:55.830 "block_size": 512, 00:18:55.830 "num_blocks": 253952, 00:18:55.830 "uuid": "261c32be-f61f-451c-b3e0-bf94e80da010", 00:18:55.830 "assigned_rate_limits": { 00:18:55.830 "rw_ios_per_sec": 0, 00:18:55.830 "rw_mbytes_per_sec": 0, 00:18:55.830 "r_mbytes_per_sec": 0, 00:18:55.830 "w_mbytes_per_sec": 0 00:18:55.830 }, 00:18:55.830 "claimed": false, 00:18:55.830 "zoned": false, 00:18:55.830 "supported_io_types": { 00:18:55.830 "read": true, 00:18:55.830 "write": true, 00:18:55.830 "unmap": true, 00:18:55.830 "flush": true, 00:18:55.830 "reset": true, 00:18:55.830 "nvme_admin": false, 00:18:55.830 "nvme_io": false, 00:18:55.830 "nvme_io_md": false, 00:18:55.830 "write_zeroes": true, 00:18:55.830 "zcopy": false, 00:18:55.830 "get_zone_info": false, 00:18:55.830 "zone_management": false, 00:18:55.830 "zone_append": false, 00:18:55.830 "compare": false, 00:18:55.830 "compare_and_write": false, 00:18:55.830 "abort": 
false, 00:18:55.830 "seek_hole": false, 00:18:55.830 "seek_data": false, 00:18:55.830 "copy": false, 00:18:55.830 "nvme_iov_md": false 00:18:55.830 }, 00:18:55.830 "memory_domains": [ 00:18:55.830 { 00:18:55.830 "dma_device_id": "system", 00:18:55.830 "dma_device_type": 1 00:18:55.830 }, 00:18:55.830 { 00:18:55.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.830 "dma_device_type": 2 00:18:55.830 }, 00:18:55.830 { 00:18:55.830 "dma_device_id": "system", 00:18:55.830 "dma_device_type": 1 00:18:55.830 }, 00:18:55.830 { 00:18:55.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.830 "dma_device_type": 2 00:18:55.830 }, 00:18:55.830 { 00:18:55.830 "dma_device_id": "system", 00:18:55.830 "dma_device_type": 1 00:18:55.830 }, 00:18:55.830 { 00:18:55.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.830 "dma_device_type": 2 00:18:55.830 }, 00:18:55.830 { 00:18:55.830 "dma_device_id": "system", 00:18:55.830 "dma_device_type": 1 00:18:55.830 }, 00:18:55.830 { 00:18:55.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.830 "dma_device_type": 2 00:18:55.830 } 00:18:55.830 ], 00:18:55.830 "driver_specific": { 00:18:55.830 "raid": { 00:18:55.830 "uuid": "261c32be-f61f-451c-b3e0-bf94e80da010", 00:18:55.830 "strip_size_kb": 64, 00:18:55.830 "state": "online", 00:18:55.830 "raid_level": "raid0", 00:18:55.830 "superblock": true, 00:18:55.830 "num_base_bdevs": 4, 00:18:55.830 "num_base_bdevs_discovered": 4, 00:18:55.830 "num_base_bdevs_operational": 4, 00:18:55.830 "base_bdevs_list": [ 00:18:55.830 { 00:18:55.830 "name": "NewBaseBdev", 00:18:55.830 "uuid": "35412b6a-dd50-4f4b-98e2-ec7ef78aea29", 00:18:55.830 "is_configured": true, 00:18:55.830 "data_offset": 2048, 00:18:55.830 "data_size": 63488 00:18:55.830 }, 00:18:55.830 { 00:18:55.830 "name": "BaseBdev2", 00:18:55.830 "uuid": "609331e8-7b53-41a9-adfd-bd842e9269ad", 00:18:55.830 "is_configured": true, 00:18:55.830 "data_offset": 2048, 00:18:55.830 "data_size": 63488 00:18:55.830 }, 00:18:55.830 { 00:18:55.830 
"name": "BaseBdev3", 00:18:55.830 "uuid": "b23e5f5c-09a3-4467-b84e-c7470cd53233", 00:18:55.830 "is_configured": true, 00:18:55.830 "data_offset": 2048, 00:18:55.830 "data_size": 63488 00:18:55.830 }, 00:18:55.830 { 00:18:55.830 "name": "BaseBdev4", 00:18:55.830 "uuid": "bed54941-5073-48b2-83af-1a5108fb6a90", 00:18:55.830 "is_configured": true, 00:18:55.830 "data_offset": 2048, 00:18:55.830 "data_size": 63488 00:18:55.830 } 00:18:55.830 ] 00:18:55.830 } 00:18:55.830 } 00:18:55.830 }' 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:55.830 BaseBdev2 00:18:55.830 BaseBdev3 00:18:55.830 BaseBdev4' 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:55.830 05:29:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.830 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.089 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.089 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:56.089 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:56.089 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:56.089 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.089 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:56.089 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.089 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.089 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.089 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.090 [2024-11-20 05:29:27.747432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:56.090 [2024-11-20 05:29:27.747468] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:56.090 [2024-11-20 05:29:27.747554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.090 [2024-11-20 05:29:27.747631] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:56.090 [2024-11-20 05:29:27.747641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68361 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 68361 ']' 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 68361 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68361 00:18:56.090 killing process with pid 68361 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68361' 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 68361 00:18:56.090 [2024-11-20 05:29:27.779387] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.090 05:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 68361 00:18:56.349 [2024-11-20 05:29:28.042417] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:57.290 ************************************ 00:18:57.290 END TEST raid_state_function_test_sb 00:18:57.290 ************************************ 00:18:57.290 05:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:57.290 00:18:57.290 real 0m8.440s 00:18:57.290 user 0m13.227s 00:18:57.290 sys 
0m1.483s 00:18:57.290 05:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:57.290 05:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.290 05:29:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:18:57.290 05:29:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:57.290 05:29:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:57.290 05:29:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.290 ************************************ 00:18:57.290 START TEST raid_superblock_test 00:18:57.290 ************************************ 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69004 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69004 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 69004 ']' 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:57.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:57.290 05:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.290 [2024-11-20 05:29:28.962670] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:18:57.290 [2024-11-20 05:29:28.963335] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69004 ] 00:18:57.550 [2024-11-20 05:29:29.127040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.550 [2024-11-20 05:29:29.244667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.811 [2024-11-20 05:29:29.393557] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.811 [2024-11-20 05:29:29.393629] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.073 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:58.073 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:18:58.073 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:58.073 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:58.073 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:58.073 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:58.073 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:58.073 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:58.073 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:58.073 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:58.073 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:18:58.073 
05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.073 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.073 malloc1 00:18:58.073 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.073 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:58.073 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.073 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.073 [2024-11-20 05:29:29.859494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:58.073 [2024-11-20 05:29:29.859566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.073 [2024-11-20 05:29:29.859591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:58.073 [2024-11-20 05:29:29.859602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.073 [2024-11-20 05:29:29.861987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.074 [2024-11-20 05:29:29.862025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:58.074 pt1 00:18:58.074 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.074 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:58.074 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:58.074 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:58.074 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:58.074 05:29:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:58.074 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:58.074 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:58.074 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:58.074 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:58.074 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.074 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.074 malloc2 00:18:58.074 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.074 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:58.074 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.074 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.336 [2024-11-20 05:29:29.905960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:58.336 [2024-11-20 05:29:29.906017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.336 [2024-11-20 05:29:29.906040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:58.336 [2024-11-20 05:29:29.906051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.336 [2024-11-20 05:29:29.908286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.336 [2024-11-20 05:29:29.908320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:58.336 
pt2 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.336 malloc3 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.336 [2024-11-20 05:29:29.970896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:58.336 [2024-11-20 05:29:29.970952] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.336 [2024-11-20 05:29:29.970977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:58.336 [2024-11-20 05:29:29.970989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.336 [2024-11-20 05:29:29.973274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.336 [2024-11-20 05:29:29.973309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:58.336 pt3 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.336 05:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.336 malloc4 00:18:58.336 05:29:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.336 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:58.336 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.336 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.336 [2024-11-20 05:29:30.013117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:58.336 [2024-11-20 05:29:30.013162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.336 [2024-11-20 05:29:30.013179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:58.336 [2024-11-20 05:29:30.013188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.336 [2024-11-20 05:29:30.015394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.336 [2024-11-20 05:29:30.015424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:58.336 pt4 00:18:58.336 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.336 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:58.336 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:58.336 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:58.336 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.336 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.336 [2024-11-20 05:29:30.021152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:58.336 [2024-11-20 
05:29:30.023104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:58.336 [2024-11-20 05:29:30.023171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:58.336 [2024-11-20 05:29:30.023238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:58.336 [2024-11-20 05:29:30.023451] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:58.336 [2024-11-20 05:29:30.023462] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:58.337 [2024-11-20 05:29:30.023728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:58.337 [2024-11-20 05:29:30.023888] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:58.337 [2024-11-20 05:29:30.023899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:58.337 [2024-11-20 05:29:30.024032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.337 "name": "raid_bdev1", 00:18:58.337 "uuid": "d95c1c6a-c23b-4b3a-9598-be783b20b835", 00:18:58.337 "strip_size_kb": 64, 00:18:58.337 "state": "online", 00:18:58.337 "raid_level": "raid0", 00:18:58.337 "superblock": true, 00:18:58.337 "num_base_bdevs": 4, 00:18:58.337 "num_base_bdevs_discovered": 4, 00:18:58.337 "num_base_bdevs_operational": 4, 00:18:58.337 "base_bdevs_list": [ 00:18:58.337 { 00:18:58.337 "name": "pt1", 00:18:58.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:58.337 "is_configured": true, 00:18:58.337 "data_offset": 2048, 00:18:58.337 "data_size": 63488 00:18:58.337 }, 00:18:58.337 { 00:18:58.337 "name": "pt2", 00:18:58.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.337 "is_configured": true, 00:18:58.337 "data_offset": 2048, 00:18:58.337 "data_size": 63488 00:18:58.337 }, 00:18:58.337 { 00:18:58.337 "name": "pt3", 00:18:58.337 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:58.337 "is_configured": true, 00:18:58.337 "data_offset": 2048, 00:18:58.337 
"data_size": 63488 00:18:58.337 }, 00:18:58.337 { 00:18:58.337 "name": "pt4", 00:18:58.337 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:58.337 "is_configured": true, 00:18:58.337 "data_offset": 2048, 00:18:58.337 "data_size": 63488 00:18:58.337 } 00:18:58.337 ] 00:18:58.337 }' 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.337 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.599 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:58.599 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:58.599 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:58.599 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:58.599 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:58.599 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:58.599 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:58.599 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.599 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.599 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:58.599 [2024-11-20 05:29:30.333615] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:58.599 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.599 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:58.599 "name": "raid_bdev1", 00:18:58.599 "aliases": [ 00:18:58.599 "d95c1c6a-c23b-4b3a-9598-be783b20b835" 
00:18:58.599 ], 00:18:58.599 "product_name": "Raid Volume", 00:18:58.599 "block_size": 512, 00:18:58.599 "num_blocks": 253952, 00:18:58.599 "uuid": "d95c1c6a-c23b-4b3a-9598-be783b20b835", 00:18:58.599 "assigned_rate_limits": { 00:18:58.599 "rw_ios_per_sec": 0, 00:18:58.599 "rw_mbytes_per_sec": 0, 00:18:58.599 "r_mbytes_per_sec": 0, 00:18:58.599 "w_mbytes_per_sec": 0 00:18:58.599 }, 00:18:58.599 "claimed": false, 00:18:58.599 "zoned": false, 00:18:58.599 "supported_io_types": { 00:18:58.599 "read": true, 00:18:58.599 "write": true, 00:18:58.599 "unmap": true, 00:18:58.599 "flush": true, 00:18:58.599 "reset": true, 00:18:58.599 "nvme_admin": false, 00:18:58.599 "nvme_io": false, 00:18:58.599 "nvme_io_md": false, 00:18:58.599 "write_zeroes": true, 00:18:58.599 "zcopy": false, 00:18:58.599 "get_zone_info": false, 00:18:58.599 "zone_management": false, 00:18:58.599 "zone_append": false, 00:18:58.599 "compare": false, 00:18:58.599 "compare_and_write": false, 00:18:58.599 "abort": false, 00:18:58.599 "seek_hole": false, 00:18:58.599 "seek_data": false, 00:18:58.599 "copy": false, 00:18:58.599 "nvme_iov_md": false 00:18:58.599 }, 00:18:58.599 "memory_domains": [ 00:18:58.599 { 00:18:58.599 "dma_device_id": "system", 00:18:58.599 "dma_device_type": 1 00:18:58.599 }, 00:18:58.599 { 00:18:58.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.599 "dma_device_type": 2 00:18:58.599 }, 00:18:58.599 { 00:18:58.599 "dma_device_id": "system", 00:18:58.599 "dma_device_type": 1 00:18:58.599 }, 00:18:58.599 { 00:18:58.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.599 "dma_device_type": 2 00:18:58.599 }, 00:18:58.599 { 00:18:58.599 "dma_device_id": "system", 00:18:58.599 "dma_device_type": 1 00:18:58.599 }, 00:18:58.599 { 00:18:58.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.599 "dma_device_type": 2 00:18:58.599 }, 00:18:58.599 { 00:18:58.599 "dma_device_id": "system", 00:18:58.599 "dma_device_type": 1 00:18:58.599 }, 00:18:58.599 { 00:18:58.599 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:58.599 "dma_device_type": 2 00:18:58.599 } 00:18:58.599 ], 00:18:58.599 "driver_specific": { 00:18:58.599 "raid": { 00:18:58.599 "uuid": "d95c1c6a-c23b-4b3a-9598-be783b20b835", 00:18:58.599 "strip_size_kb": 64, 00:18:58.599 "state": "online", 00:18:58.599 "raid_level": "raid0", 00:18:58.599 "superblock": true, 00:18:58.599 "num_base_bdevs": 4, 00:18:58.599 "num_base_bdevs_discovered": 4, 00:18:58.599 "num_base_bdevs_operational": 4, 00:18:58.599 "base_bdevs_list": [ 00:18:58.599 { 00:18:58.599 "name": "pt1", 00:18:58.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:58.599 "is_configured": true, 00:18:58.599 "data_offset": 2048, 00:18:58.599 "data_size": 63488 00:18:58.599 }, 00:18:58.599 { 00:18:58.599 "name": "pt2", 00:18:58.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.599 "is_configured": true, 00:18:58.599 "data_offset": 2048, 00:18:58.599 "data_size": 63488 00:18:58.599 }, 00:18:58.599 { 00:18:58.599 "name": "pt3", 00:18:58.599 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:58.599 "is_configured": true, 00:18:58.599 "data_offset": 2048, 00:18:58.599 "data_size": 63488 00:18:58.599 }, 00:18:58.599 { 00:18:58.599 "name": "pt4", 00:18:58.599 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:58.599 "is_configured": true, 00:18:58.599 "data_offset": 2048, 00:18:58.599 "data_size": 63488 00:18:58.599 } 00:18:58.599 ] 00:18:58.599 } 00:18:58.599 } 00:18:58.599 }' 00:18:58.599 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:58.599 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:58.599 pt2 00:18:58.599 pt3 00:18:58.599 pt4' 00:18:58.599 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.859 05:29:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:58.859 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:58.859 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:58.859 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.859 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.859 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.859 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.859 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:58.859 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:58.860 05:29:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:58.860 [2024-11-20 05:29:30.589622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d95c1c6a-c23b-4b3a-9598-be783b20b835 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d95c1c6a-c23b-4b3a-9598-be783b20b835 ']' 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.860 [2024-11-20 05:29:30.617265] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:58.860 [2024-11-20 05:29:30.617293] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.860 [2024-11-20 05:29:30.617386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.860 [2024-11-20 05:29:30.617464] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:58.860 [2024-11-20 05:29:30.617480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.860 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.122 [2024-11-20 05:29:30.729338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:59.122 [2024-11-20 05:29:30.731413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:59.122 [2024-11-20 05:29:30.731469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:59.122 [2024-11-20 05:29:30.731505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:59.122 [2024-11-20 05:29:30.731557] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:59.122 [2024-11-20 05:29:30.731608] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:59.122 [2024-11-20 05:29:30.731628] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:59.122 [2024-11-20 05:29:30.731647] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:59.122 [2024-11-20 05:29:30.731660] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:59.122 [2024-11-20 05:29:30.731674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:18:59.122 request: 00:18:59.122 { 00:18:59.122 "name": "raid_bdev1", 00:18:59.122 "raid_level": "raid0", 00:18:59.122 "base_bdevs": [ 00:18:59.122 "malloc1", 00:18:59.122 "malloc2", 00:18:59.122 "malloc3", 00:18:59.122 "malloc4" 00:18:59.122 ], 00:18:59.122 "strip_size_kb": 64, 00:18:59.122 "superblock": false, 00:18:59.122 "method": "bdev_raid_create", 00:18:59.122 "req_id": 1 00:18:59.122 } 00:18:59.122 Got JSON-RPC error response 00:18:59.122 response: 00:18:59.122 { 00:18:59.122 "code": -17, 00:18:59.122 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:59.122 } 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.122 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.122 [2024-11-20 05:29:30.769312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:59.122 [2024-11-20 05:29:30.769388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.122 [2024-11-20 05:29:30.769408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:59.122 [2024-11-20 05:29:30.769421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.122 [2024-11-20 05:29:30.771772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.122 [2024-11-20 05:29:30.771813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:59.122 [2024-11-20 05:29:30.771895] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:59.123 [2024-11-20 05:29:30.771958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:59.123 pt1 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.123 "name": "raid_bdev1", 00:18:59.123 "uuid": "d95c1c6a-c23b-4b3a-9598-be783b20b835", 00:18:59.123 "strip_size_kb": 64, 00:18:59.123 "state": "configuring", 00:18:59.123 "raid_level": "raid0", 00:18:59.123 "superblock": true, 00:18:59.123 "num_base_bdevs": 4, 00:18:59.123 "num_base_bdevs_discovered": 1, 00:18:59.123 "num_base_bdevs_operational": 4, 00:18:59.123 "base_bdevs_list": [ 00:18:59.123 { 00:18:59.123 "name": "pt1", 00:18:59.123 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:59.123 "is_configured": true, 00:18:59.123 "data_offset": 2048, 00:18:59.123 "data_size": 63488 00:18:59.123 }, 00:18:59.123 { 00:18:59.123 "name": null, 00:18:59.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:59.123 "is_configured": false, 00:18:59.123 "data_offset": 2048, 00:18:59.123 "data_size": 63488 00:18:59.123 }, 00:18:59.123 { 00:18:59.123 "name": null, 00:18:59.123 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:18:59.123 "is_configured": false, 00:18:59.123 "data_offset": 2048, 00:18:59.123 "data_size": 63488 00:18:59.123 }, 00:18:59.123 { 00:18:59.123 "name": null, 00:18:59.123 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:59.123 "is_configured": false, 00:18:59.123 "data_offset": 2048, 00:18:59.123 "data_size": 63488 00:18:59.123 } 00:18:59.123 ] 00:18:59.123 }' 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.123 05:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.384 [2024-11-20 05:29:31.073440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:59.384 [2024-11-20 05:29:31.073520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.384 [2024-11-20 05:29:31.073543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:59.384 [2024-11-20 05:29:31.073554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.384 [2024-11-20 05:29:31.074023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.384 [2024-11-20 05:29:31.074041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:59.384 [2024-11-20 05:29:31.074126] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:59.384 [2024-11-20 05:29:31.074151] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:59.384 pt2 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.384 [2024-11-20 05:29:31.081424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.384 05:29:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.384 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.384 "name": "raid_bdev1", 00:18:59.384 "uuid": "d95c1c6a-c23b-4b3a-9598-be783b20b835", 00:18:59.384 "strip_size_kb": 64, 00:18:59.384 "state": "configuring", 00:18:59.384 "raid_level": "raid0", 00:18:59.384 "superblock": true, 00:18:59.384 "num_base_bdevs": 4, 00:18:59.384 "num_base_bdevs_discovered": 1, 00:18:59.384 "num_base_bdevs_operational": 4, 00:18:59.384 "base_bdevs_list": [ 00:18:59.384 { 00:18:59.385 "name": "pt1", 00:18:59.385 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:59.385 "is_configured": true, 00:18:59.385 "data_offset": 2048, 00:18:59.385 "data_size": 63488 00:18:59.385 }, 00:18:59.385 { 00:18:59.385 "name": null, 00:18:59.385 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:59.385 "is_configured": false, 00:18:59.385 "data_offset": 0, 00:18:59.385 "data_size": 63488 00:18:59.385 }, 00:18:59.385 { 00:18:59.385 "name": null, 00:18:59.385 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:59.385 "is_configured": false, 00:18:59.385 "data_offset": 2048, 00:18:59.385 "data_size": 63488 00:18:59.385 }, 00:18:59.385 { 00:18:59.385 "name": null, 00:18:59.385 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:59.385 "is_configured": false, 00:18:59.385 "data_offset": 2048, 00:18:59.385 "data_size": 63488 00:18:59.385 } 00:18:59.385 ] 00:18:59.385 }' 00:18:59.385 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.385 05:29:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.646 [2024-11-20 05:29:31.393514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:59.646 [2024-11-20 05:29:31.393589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.646 [2024-11-20 05:29:31.393612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:59.646 [2024-11-20 05:29:31.393622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.646 [2024-11-20 05:29:31.394113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.646 [2024-11-20 05:29:31.394140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:59.646 [2024-11-20 05:29:31.394228] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:59.646 [2024-11-20 05:29:31.394250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:59.646 pt2 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.646 [2024-11-20 05:29:31.401481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:59.646 [2024-11-20 05:29:31.401534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.646 [2024-11-20 05:29:31.401559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:59.646 [2024-11-20 05:29:31.401568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.646 [2024-11-20 05:29:31.401985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.646 [2024-11-20 05:29:31.401999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:59.646 [2024-11-20 05:29:31.402068] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:59.646 [2024-11-20 05:29:31.402089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:59.646 pt3 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.646 [2024-11-20 05:29:31.413458] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:59.646 [2024-11-20 05:29:31.413507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.646 [2024-11-20 05:29:31.413526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:59.646 [2024-11-20 05:29:31.413535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.646 [2024-11-20 05:29:31.413929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.646 [2024-11-20 05:29:31.413956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:59.646 [2024-11-20 05:29:31.414016] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:59.646 [2024-11-20 05:29:31.414034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:59.646 [2024-11-20 05:29:31.414170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:59.646 [2024-11-20 05:29:31.414180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:59.646 [2024-11-20 05:29:31.414458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:59.646 [2024-11-20 05:29:31.414598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:59.646 [2024-11-20 05:29:31.414608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:59.646 [2024-11-20 05:29:31.414735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.646 pt4 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.646 "name": "raid_bdev1", 00:18:59.646 "uuid": "d95c1c6a-c23b-4b3a-9598-be783b20b835", 00:18:59.646 "strip_size_kb": 64, 00:18:59.646 "state": "online", 00:18:59.646 "raid_level": "raid0", 00:18:59.646 
"superblock": true, 00:18:59.646 "num_base_bdevs": 4, 00:18:59.646 "num_base_bdevs_discovered": 4, 00:18:59.646 "num_base_bdevs_operational": 4, 00:18:59.646 "base_bdevs_list": [ 00:18:59.646 { 00:18:59.646 "name": "pt1", 00:18:59.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:59.646 "is_configured": true, 00:18:59.646 "data_offset": 2048, 00:18:59.646 "data_size": 63488 00:18:59.646 }, 00:18:59.646 { 00:18:59.646 "name": "pt2", 00:18:59.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:59.646 "is_configured": true, 00:18:59.646 "data_offset": 2048, 00:18:59.646 "data_size": 63488 00:18:59.646 }, 00:18:59.646 { 00:18:59.646 "name": "pt3", 00:18:59.646 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:59.646 "is_configured": true, 00:18:59.646 "data_offset": 2048, 00:18:59.646 "data_size": 63488 00:18:59.646 }, 00:18:59.646 { 00:18:59.646 "name": "pt4", 00:18:59.646 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:59.646 "is_configured": true, 00:18:59.646 "data_offset": 2048, 00:18:59.646 "data_size": 63488 00:18:59.646 } 00:18:59.646 ] 00:18:59.646 }' 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.646 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.906 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:00.168 05:29:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:00.168 [2024-11-20 05:29:31.745973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:00.168 "name": "raid_bdev1", 00:19:00.168 "aliases": [ 00:19:00.168 "d95c1c6a-c23b-4b3a-9598-be783b20b835" 00:19:00.168 ], 00:19:00.168 "product_name": "Raid Volume", 00:19:00.168 "block_size": 512, 00:19:00.168 "num_blocks": 253952, 00:19:00.168 "uuid": "d95c1c6a-c23b-4b3a-9598-be783b20b835", 00:19:00.168 "assigned_rate_limits": { 00:19:00.168 "rw_ios_per_sec": 0, 00:19:00.168 "rw_mbytes_per_sec": 0, 00:19:00.168 "r_mbytes_per_sec": 0, 00:19:00.168 "w_mbytes_per_sec": 0 00:19:00.168 }, 00:19:00.168 "claimed": false, 00:19:00.168 "zoned": false, 00:19:00.168 "supported_io_types": { 00:19:00.168 "read": true, 00:19:00.168 "write": true, 00:19:00.168 "unmap": true, 00:19:00.168 "flush": true, 00:19:00.168 "reset": true, 00:19:00.168 "nvme_admin": false, 00:19:00.168 "nvme_io": false, 00:19:00.168 "nvme_io_md": false, 00:19:00.168 "write_zeroes": true, 00:19:00.168 "zcopy": false, 00:19:00.168 "get_zone_info": false, 00:19:00.168 "zone_management": false, 00:19:00.168 "zone_append": false, 00:19:00.168 "compare": false, 00:19:00.168 "compare_and_write": false, 00:19:00.168 "abort": false, 00:19:00.168 "seek_hole": false, 00:19:00.168 "seek_data": false, 00:19:00.168 "copy": false, 00:19:00.168 "nvme_iov_md": false 00:19:00.168 }, 00:19:00.168 
"memory_domains": [ 00:19:00.168 { 00:19:00.168 "dma_device_id": "system", 00:19:00.168 "dma_device_type": 1 00:19:00.168 }, 00:19:00.168 { 00:19:00.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.168 "dma_device_type": 2 00:19:00.168 }, 00:19:00.168 { 00:19:00.168 "dma_device_id": "system", 00:19:00.168 "dma_device_type": 1 00:19:00.168 }, 00:19:00.168 { 00:19:00.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.168 "dma_device_type": 2 00:19:00.168 }, 00:19:00.168 { 00:19:00.168 "dma_device_id": "system", 00:19:00.168 "dma_device_type": 1 00:19:00.168 }, 00:19:00.168 { 00:19:00.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.168 "dma_device_type": 2 00:19:00.168 }, 00:19:00.168 { 00:19:00.168 "dma_device_id": "system", 00:19:00.168 "dma_device_type": 1 00:19:00.168 }, 00:19:00.168 { 00:19:00.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.168 "dma_device_type": 2 00:19:00.168 } 00:19:00.168 ], 00:19:00.168 "driver_specific": { 00:19:00.168 "raid": { 00:19:00.168 "uuid": "d95c1c6a-c23b-4b3a-9598-be783b20b835", 00:19:00.168 "strip_size_kb": 64, 00:19:00.168 "state": "online", 00:19:00.168 "raid_level": "raid0", 00:19:00.168 "superblock": true, 00:19:00.168 "num_base_bdevs": 4, 00:19:00.168 "num_base_bdevs_discovered": 4, 00:19:00.168 "num_base_bdevs_operational": 4, 00:19:00.168 "base_bdevs_list": [ 00:19:00.168 { 00:19:00.168 "name": "pt1", 00:19:00.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:00.168 "is_configured": true, 00:19:00.168 "data_offset": 2048, 00:19:00.168 "data_size": 63488 00:19:00.168 }, 00:19:00.168 { 00:19:00.168 "name": "pt2", 00:19:00.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:00.168 "is_configured": true, 00:19:00.168 "data_offset": 2048, 00:19:00.168 "data_size": 63488 00:19:00.168 }, 00:19:00.168 { 00:19:00.168 "name": "pt3", 00:19:00.168 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:00.168 "is_configured": true, 00:19:00.168 "data_offset": 2048, 00:19:00.168 "data_size": 63488 
00:19:00.168 }, 00:19:00.168 { 00:19:00.168 "name": "pt4", 00:19:00.168 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:00.168 "is_configured": true, 00:19:00.168 "data_offset": 2048, 00:19:00.168 "data_size": 63488 00:19:00.168 } 00:19:00.168 ] 00:19:00.168 } 00:19:00.168 } 00:19:00.168 }' 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:00.168 pt2 00:19:00.168 pt3 00:19:00.168 pt4' 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.168 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:00.169 
05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:00.169 [2024-11-20 05:29:31.965933] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d95c1c6a-c23b-4b3a-9598-be783b20b835 '!=' d95c1c6a-c23b-4b3a-9598-be783b20b835 ']' 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69004 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 69004 ']' 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 69004 00:19:00.169 05:29:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:19:00.430 05:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:00.430 05:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69004 00:19:00.430 killing process with pid 69004 00:19:00.430 05:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:00.430 05:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:00.430 05:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69004' 00:19:00.430 05:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 69004 00:19:00.430 [2024-11-20 05:29:32.022553] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:00.430 05:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 69004 00:19:00.430 [2024-11-20 05:29:32.022651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.430 [2024-11-20 05:29:32.022733] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.430 [2024-11-20 05:29:32.022743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:00.691 [2024-11-20 05:29:32.284306] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:01.263 ************************************ 00:19:01.263 END TEST raid_superblock_test 00:19:01.263 ************************************ 00:19:01.263 05:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:01.263 00:19:01.263 real 0m4.138s 00:19:01.263 user 0m5.820s 00:19:01.263 sys 0m0.696s 00:19:01.263 05:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:01.263 05:29:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.263 05:29:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:19:01.263 05:29:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:01.263 05:29:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:01.263 05:29:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:01.263 ************************************ 00:19:01.263 START TEST raid_read_error_test 00:19:01.263 ************************************ 00:19:01.263 05:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:19:01.263 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:19:01.263 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:19:01.263 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:19:01.525 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:01.525 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:01.525 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:01.525 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:01.525 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:01.525 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:01.525 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:01.525 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:01.525 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:19:01.525 05:29:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:01.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YJqSSitBmk 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69252 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69252 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 69252 ']' 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:01.526 05:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.526 [2024-11-20 05:29:33.178404] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:19:01.526 [2024-11-20 05:29:33.178544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69252 ] 00:19:01.526 [2024-11-20 05:29:33.334290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.788 [2024-11-20 05:29:33.450747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.788 [2024-11-20 05:29:33.606826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:01.788 [2024-11-20 05:29:33.606883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.357 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:02.357 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:19:02.357 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:02.357 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:02.357 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.358 BaseBdev1_malloc 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.358 true 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.358 [2024-11-20 05:29:34.072646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:02.358 [2024-11-20 05:29:34.072713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.358 [2024-11-20 05:29:34.072735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:02.358 [2024-11-20 05:29:34.072747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.358 [2024-11-20 05:29:34.075002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.358 [2024-11-20 05:29:34.075200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:02.358 BaseBdev1 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.358 BaseBdev2_malloc 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.358 true 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.358 [2024-11-20 05:29:34.118590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:02.358 [2024-11-20 05:29:34.118656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.358 [2024-11-20 05:29:34.118674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:02.358 [2024-11-20 05:29:34.118684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.358 [2024-11-20 05:29:34.120926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.358 [2024-11-20 05:29:34.120964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:02.358 BaseBdev2 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.358 BaseBdev3_malloc 00:19:02.358 05:29:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.358 true 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.358 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.358 [2024-11-20 05:29:34.187939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:02.358 [2024-11-20 05:29:34.188006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.358 [2024-11-20 05:29:34.188026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:02.358 [2024-11-20 05:29:34.188038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.619 [2024-11-20 05:29:34.190354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.619 [2024-11-20 05:29:34.190405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:02.619 BaseBdev3 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.619 BaseBdev4_malloc 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.619 true 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.619 [2024-11-20 05:29:34.234452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:02.619 [2024-11-20 05:29:34.234509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.619 [2024-11-20 05:29:34.234529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:02.619 [2024-11-20 05:29:34.234540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.619 [2024-11-20 05:29:34.236773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.619 [2024-11-20 05:29:34.236811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:02.619 BaseBdev4 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.619 [2024-11-20 05:29:34.242524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:02.619 [2024-11-20 05:29:34.244493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:02.619 [2024-11-20 05:29:34.244573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:02.619 [2024-11-20 05:29:34.244641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:02.619 [2024-11-20 05:29:34.244870] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:19:02.619 [2024-11-20 05:29:34.244887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:02.619 [2024-11-20 05:29:34.245143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:19:02.619 [2024-11-20 05:29:34.245292] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:19:02.619 [2024-11-20 05:29:34.245303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:19:02.619 [2024-11-20 05:29:34.245468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:02.619 05:29:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.619 "name": "raid_bdev1", 00:19:02.619 "uuid": "8aeca3f6-2e83-4253-8a2a-badb6fcc6df3", 00:19:02.619 "strip_size_kb": 64, 00:19:02.619 "state": "online", 00:19:02.619 "raid_level": "raid0", 00:19:02.619 "superblock": true, 00:19:02.619 "num_base_bdevs": 4, 00:19:02.619 "num_base_bdevs_discovered": 4, 00:19:02.619 "num_base_bdevs_operational": 4, 00:19:02.619 "base_bdevs_list": [ 00:19:02.619 
{ 00:19:02.619 "name": "BaseBdev1", 00:19:02.619 "uuid": "9525f8fc-7067-5984-9706-ed869550381a", 00:19:02.619 "is_configured": true, 00:19:02.619 "data_offset": 2048, 00:19:02.619 "data_size": 63488 00:19:02.619 }, 00:19:02.619 { 00:19:02.619 "name": "BaseBdev2", 00:19:02.619 "uuid": "10fc8667-4336-59ba-89b2-feeabf8713bd", 00:19:02.619 "is_configured": true, 00:19:02.619 "data_offset": 2048, 00:19:02.619 "data_size": 63488 00:19:02.619 }, 00:19:02.619 { 00:19:02.619 "name": "BaseBdev3", 00:19:02.619 "uuid": "7d1e5442-7c0b-5ae5-ad74-eae5c7c8ede6", 00:19:02.619 "is_configured": true, 00:19:02.619 "data_offset": 2048, 00:19:02.619 "data_size": 63488 00:19:02.619 }, 00:19:02.619 { 00:19:02.619 "name": "BaseBdev4", 00:19:02.619 "uuid": "bba2806a-ce78-5476-908c-43ffaa05a008", 00:19:02.619 "is_configured": true, 00:19:02.619 "data_offset": 2048, 00:19:02.619 "data_size": 63488 00:19:02.619 } 00:19:02.619 ] 00:19:02.619 }' 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.619 05:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.881 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:02.881 05:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:02.881 [2024-11-20 05:29:34.667709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.823 05:29:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.823 05:29:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.823 "name": "raid_bdev1", 00:19:03.823 "uuid": "8aeca3f6-2e83-4253-8a2a-badb6fcc6df3", 00:19:03.823 "strip_size_kb": 64, 00:19:03.823 "state": "online", 00:19:03.823 "raid_level": "raid0", 00:19:03.823 "superblock": true, 00:19:03.823 "num_base_bdevs": 4, 00:19:03.823 "num_base_bdevs_discovered": 4, 00:19:03.823 "num_base_bdevs_operational": 4, 00:19:03.823 "base_bdevs_list": [ 00:19:03.823 { 00:19:03.823 "name": "BaseBdev1", 00:19:03.823 "uuid": "9525f8fc-7067-5984-9706-ed869550381a", 00:19:03.823 "is_configured": true, 00:19:03.823 "data_offset": 2048, 00:19:03.823 "data_size": 63488 00:19:03.823 }, 00:19:03.823 { 00:19:03.823 "name": "BaseBdev2", 00:19:03.823 "uuid": "10fc8667-4336-59ba-89b2-feeabf8713bd", 00:19:03.823 "is_configured": true, 00:19:03.823 "data_offset": 2048, 00:19:03.823 "data_size": 63488 00:19:03.823 }, 00:19:03.823 { 00:19:03.823 "name": "BaseBdev3", 00:19:03.823 "uuid": "7d1e5442-7c0b-5ae5-ad74-eae5c7c8ede6", 00:19:03.823 "is_configured": true, 00:19:03.823 "data_offset": 2048, 00:19:03.823 "data_size": 63488 00:19:03.823 }, 00:19:03.823 { 00:19:03.823 "name": "BaseBdev4", 00:19:03.823 "uuid": "bba2806a-ce78-5476-908c-43ffaa05a008", 00:19:03.823 "is_configured": true, 00:19:03.823 "data_offset": 2048, 00:19:03.823 "data_size": 63488 00:19:03.823 } 00:19:03.823 ] 00:19:03.823 }' 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.823 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.086 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:04.086 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.086 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.086 [2024-11-20 05:29:35.897991] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.086 [2024-11-20 05:29:35.898198] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:04.086 [2024-11-20 05:29:35.901387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.086 [2024-11-20 05:29:35.901536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.086 [2024-11-20 05:29:35.901593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:04.086 [2024-11-20 05:29:35.901606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:19:04.086 { 00:19:04.086 "results": [ 00:19:04.086 { 00:19:04.086 "job": "raid_bdev1", 00:19:04.086 "core_mask": "0x1", 00:19:04.086 "workload": "randrw", 00:19:04.086 "percentage": 50, 00:19:04.086 "status": "finished", 00:19:04.086 "queue_depth": 1, 00:19:04.086 "io_size": 131072, 00:19:04.086 "runtime": 1.228519, 00:19:04.086 "iops": 13920.826621322096, 00:19:04.086 "mibps": 1740.103327665262, 00:19:04.086 "io_failed": 1, 00:19:04.086 "io_timeout": 0, 00:19:04.086 "avg_latency_us": 98.88427887145306, 00:19:04.086 "min_latency_us": 33.28, 00:19:04.086 "max_latency_us": 1701.4153846153847 00:19:04.086 } 00:19:04.086 ], 00:19:04.086 "core_count": 1 00:19:04.086 } 00:19:04.086 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.086 05:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69252 00:19:04.086 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 69252 ']' 00:19:04.086 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 69252 00:19:04.086 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:19:04.086 05:29:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:04.086 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69252 00:19:04.348 killing process with pid 69252 00:19:04.348 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:04.348 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:04.348 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69252' 00:19:04.348 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 69252 00:19:04.348 [2024-11-20 05:29:35.927050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:04.348 05:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 69252 00:19:04.348 [2024-11-20 05:29:36.140232] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:05.291 05:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YJqSSitBmk 00:19:05.291 05:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:05.291 05:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:05.291 05:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:19:05.291 05:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:19:05.291 05:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:05.291 05:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:05.291 05:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:19:05.291 00:19:05.291 real 0m3.847s 00:19:05.291 user 0m4.520s 00:19:05.291 sys 0m0.435s 00:19:05.291 05:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:19:05.291 05:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.291 ************************************ 00:19:05.291 END TEST raid_read_error_test 00:19:05.291 ************************************ 00:19:05.291 05:29:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:19:05.291 05:29:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:05.291 05:29:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:05.291 05:29:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:05.291 ************************************ 00:19:05.291 START TEST raid_write_error_test 00:19:05.291 ************************************ 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:05.291 05:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:05.291 05:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3tK1cD8q1G 00:19:05.291 05:29:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69392 00:19:05.291 05:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69392 00:19:05.291 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69392 ']' 00:19:05.291 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.291 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:05.291 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.291 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:05.291 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.291 05:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:05.291 [2024-11-20 05:29:37.068517] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:19:05.291 [2024-11-20 05:29:37.068799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69392 ] 00:19:05.553 [2024-11-20 05:29:37.332078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.816 [2024-11-20 05:29:37.471154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.816 [2024-11-20 05:29:37.623331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.816 [2024-11-20 05:29:37.623398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.386 BaseBdev1_malloc 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.386 true 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.386 [2024-11-20 05:29:37.962145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:06.386 [2024-11-20 05:29:37.962212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.386 [2024-11-20 05:29:37.962232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:06.386 [2024-11-20 05:29:37.962244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.386 [2024-11-20 05:29:37.964542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.386 [2024-11-20 05:29:37.964579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:06.386 BaseBdev1 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.386 BaseBdev2_malloc 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:06.386 05:29:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.386 05:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.386 true 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.386 [2024-11-20 05:29:38.008805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:06.386 [2024-11-20 05:29:38.008862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.386 [2024-11-20 05:29:38.008878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:06.386 [2024-11-20 05:29:38.008890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.386 [2024-11-20 05:29:38.011163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.386 [2024-11-20 05:29:38.011204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:06.386 BaseBdev2 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:19:06.386 BaseBdev3_malloc 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.386 true 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.386 [2024-11-20 05:29:38.065492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:06.386 [2024-11-20 05:29:38.065553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.386 [2024-11-20 05:29:38.065571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:06.386 [2024-11-20 05:29:38.065582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.386 [2024-11-20 05:29:38.067825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.386 [2024-11-20 05:29:38.068013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:06.386 BaseBdev3 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.386 BaseBdev4_malloc 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.386 true 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.386 [2024-11-20 05:29:38.111688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:06.386 [2024-11-20 05:29:38.111739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.386 [2024-11-20 05:29:38.111773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:06.386 [2024-11-20 05:29:38.111784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.386 [2024-11-20 05:29:38.114010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.386 [2024-11-20 05:29:38.114051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:06.386 BaseBdev4 
00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.386 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.386 [2024-11-20 05:29:38.119763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:06.387 [2024-11-20 05:29:38.121717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:06.387 [2024-11-20 05:29:38.121797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:06.387 [2024-11-20 05:29:38.121863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:06.387 [2024-11-20 05:29:38.122086] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:19:06.387 [2024-11-20 05:29:38.122103] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:06.387 [2024-11-20 05:29:38.122379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:19:06.387 [2024-11-20 05:29:38.122533] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:19:06.387 [2024-11-20 05:29:38.122586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:19:06.387 [2024-11-20 05:29:38.122735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.387 "name": "raid_bdev1", 00:19:06.387 "uuid": "26f313a5-5dbb-4ad3-bf60-0ae441fa90d3", 00:19:06.387 "strip_size_kb": 64, 00:19:06.387 "state": "online", 00:19:06.387 "raid_level": "raid0", 00:19:06.387 "superblock": true, 00:19:06.387 "num_base_bdevs": 4, 00:19:06.387 "num_base_bdevs_discovered": 4, 00:19:06.387 
"num_base_bdevs_operational": 4, 00:19:06.387 "base_bdevs_list": [ 00:19:06.387 { 00:19:06.387 "name": "BaseBdev1", 00:19:06.387 "uuid": "cd329a29-f1ce-5f0a-ab24-b793fe8456e7", 00:19:06.387 "is_configured": true, 00:19:06.387 "data_offset": 2048, 00:19:06.387 "data_size": 63488 00:19:06.387 }, 00:19:06.387 { 00:19:06.387 "name": "BaseBdev2", 00:19:06.387 "uuid": "e61ce8b8-d5b2-5d2b-97d2-35873bc7851f", 00:19:06.387 "is_configured": true, 00:19:06.387 "data_offset": 2048, 00:19:06.387 "data_size": 63488 00:19:06.387 }, 00:19:06.387 { 00:19:06.387 "name": "BaseBdev3", 00:19:06.387 "uuid": "a6077a34-79b4-5bce-beff-0aa52068f628", 00:19:06.387 "is_configured": true, 00:19:06.387 "data_offset": 2048, 00:19:06.387 "data_size": 63488 00:19:06.387 }, 00:19:06.387 { 00:19:06.387 "name": "BaseBdev4", 00:19:06.387 "uuid": "ea69774b-c5e9-53b0-a3f2-b2c247a29995", 00:19:06.387 "is_configured": true, 00:19:06.387 "data_offset": 2048, 00:19:06.387 "data_size": 63488 00:19:06.387 } 00:19:06.387 ] 00:19:06.387 }' 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.387 05:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.648 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:06.648 05:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:06.908 [2024-11-20 05:29:38.544907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.847 "name": "raid_bdev1", 00:19:07.847 "uuid": "26f313a5-5dbb-4ad3-bf60-0ae441fa90d3", 00:19:07.847 "strip_size_kb": 64, 00:19:07.847 "state": "online", 00:19:07.847 "raid_level": "raid0", 00:19:07.847 "superblock": true, 00:19:07.847 "num_base_bdevs": 4, 00:19:07.847 "num_base_bdevs_discovered": 4, 00:19:07.847 "num_base_bdevs_operational": 4, 00:19:07.847 "base_bdevs_list": [ 00:19:07.847 { 00:19:07.847 "name": "BaseBdev1", 00:19:07.847 "uuid": "cd329a29-f1ce-5f0a-ab24-b793fe8456e7", 00:19:07.847 "is_configured": true, 00:19:07.847 "data_offset": 2048, 00:19:07.847 "data_size": 63488 00:19:07.847 }, 00:19:07.847 { 00:19:07.847 "name": "BaseBdev2", 00:19:07.847 "uuid": "e61ce8b8-d5b2-5d2b-97d2-35873bc7851f", 00:19:07.847 "is_configured": true, 00:19:07.847 "data_offset": 2048, 00:19:07.847 "data_size": 63488 00:19:07.847 }, 00:19:07.847 { 00:19:07.847 "name": "BaseBdev3", 00:19:07.847 "uuid": "a6077a34-79b4-5bce-beff-0aa52068f628", 00:19:07.847 "is_configured": true, 00:19:07.847 "data_offset": 2048, 00:19:07.847 "data_size": 63488 00:19:07.847 }, 00:19:07.847 { 00:19:07.847 "name": "BaseBdev4", 00:19:07.847 "uuid": "ea69774b-c5e9-53b0-a3f2-b2c247a29995", 00:19:07.847 "is_configured": true, 00:19:07.847 "data_offset": 2048, 00:19:07.847 "data_size": 63488 00:19:07.847 } 00:19:07.847 ] 00:19:07.847 }' 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.847 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.106 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:08.106 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.106 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:19:08.106 [2024-11-20 05:29:39.823607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:08.106 [2024-11-20 05:29:39.823643] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:08.106 [2024-11-20 05:29:39.827091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.106 [2024-11-20 05:29:39.827173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.106 [2024-11-20 05:29:39.827223] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:08.106 [2024-11-20 05:29:39.827234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:19:08.106 { 00:19:08.106 "results": [ 00:19:08.106 { 00:19:08.106 "job": "raid_bdev1", 00:19:08.106 "core_mask": "0x1", 00:19:08.106 "workload": "randrw", 00:19:08.106 "percentage": 50, 00:19:08.106 "status": "finished", 00:19:08.106 "queue_depth": 1, 00:19:08.106 "io_size": 131072, 00:19:08.106 "runtime": 1.276639, 00:19:08.106 "iops": 13885.679506892708, 00:19:08.106 "mibps": 1735.7099383615885, 00:19:08.106 "io_failed": 1, 00:19:08.106 "io_timeout": 0, 00:19:08.106 "avg_latency_us": 99.13262427103582, 00:19:08.106 "min_latency_us": 33.28, 00:19:08.106 "max_latency_us": 1714.0184615384615 00:19:08.106 } 00:19:08.106 ], 00:19:08.106 "core_count": 1 00:19:08.106 } 00:19:08.106 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.106 05:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69392 00:19:08.106 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69392 ']' 00:19:08.106 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69392 00:19:08.106 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 
00:19:08.106 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:08.106 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69392 00:19:08.106 killing process with pid 69392 00:19:08.106 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:08.106 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:08.106 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69392' 00:19:08.106 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69392 00:19:08.106 05:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69392 00:19:08.107 [2024-11-20 05:29:39.855595] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:08.367 [2024-11-20 05:29:40.075459] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:09.310 05:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3tK1cD8q1G 00:19:09.310 05:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:09.310 05:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:09.310 05:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.78 00:19:09.310 05:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:19:09.310 05:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:09.310 05:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:09.310 05:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.78 != \0\.\0\0 ]] 00:19:09.310 00:19:09.310 real 0m4.012s 00:19:09.310 user 0m4.679s 00:19:09.310 sys 0m0.475s 00:19:09.310 05:29:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:09.310 05:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.310 ************************************ 00:19:09.310 END TEST raid_write_error_test 00:19:09.310 ************************************ 00:19:09.310 05:29:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:19:09.310 05:29:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:19:09.310 05:29:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:09.310 05:29:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:09.310 05:29:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.310 ************************************ 00:19:09.310 START TEST raid_state_function_test 00:19:09.310 ************************************ 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:09.310 Process raid pid: 69530 00:19:09.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69530 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69530' 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69530 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69530 ']' 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.310 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:09.572 [2024-11-20 05:29:41.145261] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:19:09.572 [2024-11-20 05:29:41.145431] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.572 [2024-11-20 05:29:41.301709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.832 [2024-11-20 05:29:41.425645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.832 [2024-11-20 05:29:41.577152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.832 [2024-11-20 05:29:41.577414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.409 05:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:10.409 05:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:19:10.409 05:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:10.409 05:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.409 05:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.409 
[2024-11-20 05:29:41.999326] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:10.409 [2024-11-20 05:29:41.999415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:10.409 [2024-11-20 05:29:41.999427] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:10.409 [2024-11-20 05:29:41.999438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:10.409 [2024-11-20 05:29:41.999444] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:10.409 [2024-11-20 05:29:41.999454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:10.409 [2024-11-20 05:29:41.999460] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:10.409 [2024-11-20 05:29:41.999469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.409 
05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.409 "name": "Existed_Raid", 00:19:10.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.409 "strip_size_kb": 64, 00:19:10.409 "state": "configuring", 00:19:10.409 "raid_level": "concat", 00:19:10.409 "superblock": false, 00:19:10.409 "num_base_bdevs": 4, 00:19:10.409 "num_base_bdevs_discovered": 0, 00:19:10.409 "num_base_bdevs_operational": 4, 00:19:10.409 "base_bdevs_list": [ 00:19:10.409 { 00:19:10.409 "name": "BaseBdev1", 00:19:10.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.409 "is_configured": false, 00:19:10.409 "data_offset": 0, 00:19:10.409 "data_size": 0 00:19:10.409 }, 00:19:10.409 { 00:19:10.409 "name": "BaseBdev2", 00:19:10.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.409 "is_configured": false, 00:19:10.409 "data_offset": 0, 00:19:10.409 "data_size": 0 00:19:10.409 }, 00:19:10.409 { 00:19:10.409 "name": "BaseBdev3", 00:19:10.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.409 "is_configured": false, 00:19:10.409 
"data_offset": 0, 00:19:10.409 "data_size": 0 00:19:10.409 }, 00:19:10.409 { 00:19:10.409 "name": "BaseBdev4", 00:19:10.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.409 "is_configured": false, 00:19:10.409 "data_offset": 0, 00:19:10.409 "data_size": 0 00:19:10.409 } 00:19:10.409 ] 00:19:10.409 }' 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.409 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.672 [2024-11-20 05:29:42.339347] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:10.672 [2024-11-20 05:29:42.339407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.672 [2024-11-20 05:29:42.347333] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:10.672 [2024-11-20 05:29:42.347394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:10.672 [2024-11-20 05:29:42.347404] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:19:10.672 [2024-11-20 05:29:42.347414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:10.672 [2024-11-20 05:29:42.347420] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:10.672 [2024-11-20 05:29:42.347429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:10.672 [2024-11-20 05:29:42.347435] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:10.672 [2024-11-20 05:29:42.347444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.672 [2024-11-20 05:29:42.383105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:10.672 BaseBdev1 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:10.672 05:29:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.672 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.673 [ 00:19:10.673 { 00:19:10.673 "name": "BaseBdev1", 00:19:10.673 "aliases": [ 00:19:10.673 "9a4ac52d-867f-4281-8478-1edf818dfb6c" 00:19:10.673 ], 00:19:10.673 "product_name": "Malloc disk", 00:19:10.673 "block_size": 512, 00:19:10.673 "num_blocks": 65536, 00:19:10.673 "uuid": "9a4ac52d-867f-4281-8478-1edf818dfb6c", 00:19:10.673 "assigned_rate_limits": { 00:19:10.673 "rw_ios_per_sec": 0, 00:19:10.673 "rw_mbytes_per_sec": 0, 00:19:10.673 "r_mbytes_per_sec": 0, 00:19:10.673 "w_mbytes_per_sec": 0 00:19:10.673 }, 00:19:10.673 "claimed": true, 00:19:10.673 "claim_type": "exclusive_write", 00:19:10.673 "zoned": false, 00:19:10.673 "supported_io_types": { 00:19:10.673 "read": true, 00:19:10.673 "write": true, 00:19:10.673 "unmap": true, 00:19:10.673 "flush": true, 00:19:10.673 "reset": true, 00:19:10.673 "nvme_admin": false, 00:19:10.673 "nvme_io": false, 00:19:10.673 "nvme_io_md": false, 00:19:10.673 "write_zeroes": true, 00:19:10.673 "zcopy": true, 00:19:10.673 "get_zone_info": false, 00:19:10.673 "zone_management": false, 00:19:10.673 "zone_append": false, 00:19:10.673 "compare": false, 
00:19:10.673 "compare_and_write": false, 00:19:10.673 "abort": true, 00:19:10.673 "seek_hole": false, 00:19:10.673 "seek_data": false, 00:19:10.673 "copy": true, 00:19:10.673 "nvme_iov_md": false 00:19:10.673 }, 00:19:10.673 "memory_domains": [ 00:19:10.673 { 00:19:10.673 "dma_device_id": "system", 00:19:10.673 "dma_device_type": 1 00:19:10.673 }, 00:19:10.673 { 00:19:10.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.673 "dma_device_type": 2 00:19:10.673 } 00:19:10.673 ], 00:19:10.673 "driver_specific": {} 00:19:10.673 } 00:19:10.673 ] 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.673 "name": "Existed_Raid", 00:19:10.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.673 "strip_size_kb": 64, 00:19:10.673 "state": "configuring", 00:19:10.673 "raid_level": "concat", 00:19:10.673 "superblock": false, 00:19:10.673 "num_base_bdevs": 4, 00:19:10.673 "num_base_bdevs_discovered": 1, 00:19:10.673 "num_base_bdevs_operational": 4, 00:19:10.673 "base_bdevs_list": [ 00:19:10.673 { 00:19:10.673 "name": "BaseBdev1", 00:19:10.673 "uuid": "9a4ac52d-867f-4281-8478-1edf818dfb6c", 00:19:10.673 "is_configured": true, 00:19:10.673 "data_offset": 0, 00:19:10.673 "data_size": 65536 00:19:10.673 }, 00:19:10.673 { 00:19:10.673 "name": "BaseBdev2", 00:19:10.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.673 "is_configured": false, 00:19:10.673 "data_offset": 0, 00:19:10.673 "data_size": 0 00:19:10.673 }, 00:19:10.673 { 00:19:10.673 "name": "BaseBdev3", 00:19:10.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.673 "is_configured": false, 00:19:10.673 "data_offset": 0, 00:19:10.673 "data_size": 0 00:19:10.673 }, 00:19:10.673 { 00:19:10.673 "name": "BaseBdev4", 00:19:10.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.673 "is_configured": false, 00:19:10.673 "data_offset": 0, 00:19:10.673 "data_size": 0 00:19:10.673 } 00:19:10.673 ] 00:19:10.673 }' 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.673 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.935 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:10.935 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.935 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.935 [2024-11-20 05:29:42.727445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:10.935 [2024-11-20 05:29:42.727755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:10.935 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.935 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:10.935 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.936 [2024-11-20 05:29:42.735627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:10.936 [2024-11-20 05:29:42.742551] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:10.936 [2024-11-20 05:29:42.742609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:10.936 [2024-11-20 05:29:42.742634] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:10.936 [2024-11-20 05:29:42.742653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:10.936 [2024-11-20 05:29:42.742666] bdev.c:8348:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:19:10.936 [2024-11-20 05:29:42.742688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.936 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.197 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.197 "name": "Existed_Raid", 00:19:11.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.197 "strip_size_kb": 64, 00:19:11.197 "state": "configuring", 00:19:11.197 "raid_level": "concat", 00:19:11.197 "superblock": false, 00:19:11.197 "num_base_bdevs": 4, 00:19:11.197 "num_base_bdevs_discovered": 1, 00:19:11.197 "num_base_bdevs_operational": 4, 00:19:11.197 "base_bdevs_list": [ 00:19:11.197 { 00:19:11.197 "name": "BaseBdev1", 00:19:11.197 "uuid": "9a4ac52d-867f-4281-8478-1edf818dfb6c", 00:19:11.197 "is_configured": true, 00:19:11.197 "data_offset": 0, 00:19:11.197 "data_size": 65536 00:19:11.197 }, 00:19:11.197 { 00:19:11.197 "name": "BaseBdev2", 00:19:11.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.197 "is_configured": false, 00:19:11.197 "data_offset": 0, 00:19:11.197 "data_size": 0 00:19:11.197 }, 00:19:11.197 { 00:19:11.197 "name": "BaseBdev3", 00:19:11.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.197 "is_configured": false, 00:19:11.197 "data_offset": 0, 00:19:11.197 "data_size": 0 00:19:11.197 }, 00:19:11.197 { 00:19:11.197 "name": "BaseBdev4", 00:19:11.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.197 "is_configured": false, 00:19:11.197 "data_offset": 0, 00:19:11.197 "data_size": 0 00:19:11.197 } 00:19:11.197 ] 00:19:11.197 }' 00:19:11.197 05:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.197 05:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.458 [2024-11-20 05:29:43.082151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:11.458 BaseBdev2 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.458 [ 00:19:11.458 { 00:19:11.458 "name": 
"BaseBdev2", 00:19:11.458 "aliases": [ 00:19:11.458 "f8eda9fa-8814-4e7e-b712-64c74981c620" 00:19:11.458 ], 00:19:11.458 "product_name": "Malloc disk", 00:19:11.458 "block_size": 512, 00:19:11.458 "num_blocks": 65536, 00:19:11.458 "uuid": "f8eda9fa-8814-4e7e-b712-64c74981c620", 00:19:11.458 "assigned_rate_limits": { 00:19:11.458 "rw_ios_per_sec": 0, 00:19:11.458 "rw_mbytes_per_sec": 0, 00:19:11.458 "r_mbytes_per_sec": 0, 00:19:11.458 "w_mbytes_per_sec": 0 00:19:11.458 }, 00:19:11.458 "claimed": true, 00:19:11.458 "claim_type": "exclusive_write", 00:19:11.458 "zoned": false, 00:19:11.458 "supported_io_types": { 00:19:11.458 "read": true, 00:19:11.458 "write": true, 00:19:11.458 "unmap": true, 00:19:11.458 "flush": true, 00:19:11.458 "reset": true, 00:19:11.458 "nvme_admin": false, 00:19:11.458 "nvme_io": false, 00:19:11.458 "nvme_io_md": false, 00:19:11.458 "write_zeroes": true, 00:19:11.458 "zcopy": true, 00:19:11.458 "get_zone_info": false, 00:19:11.458 "zone_management": false, 00:19:11.458 "zone_append": false, 00:19:11.458 "compare": false, 00:19:11.458 "compare_and_write": false, 00:19:11.458 "abort": true, 00:19:11.458 "seek_hole": false, 00:19:11.458 "seek_data": false, 00:19:11.458 "copy": true, 00:19:11.458 "nvme_iov_md": false 00:19:11.458 }, 00:19:11.458 "memory_domains": [ 00:19:11.458 { 00:19:11.458 "dma_device_id": "system", 00:19:11.458 "dma_device_type": 1 00:19:11.458 }, 00:19:11.458 { 00:19:11.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.458 "dma_device_type": 2 00:19:11.458 } 00:19:11.458 ], 00:19:11.458 "driver_specific": {} 00:19:11.458 } 00:19:11.458 ] 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.458 "name": "Existed_Raid", 00:19:11.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.458 
"strip_size_kb": 64, 00:19:11.458 "state": "configuring", 00:19:11.458 "raid_level": "concat", 00:19:11.458 "superblock": false, 00:19:11.458 "num_base_bdevs": 4, 00:19:11.458 "num_base_bdevs_discovered": 2, 00:19:11.458 "num_base_bdevs_operational": 4, 00:19:11.458 "base_bdevs_list": [ 00:19:11.458 { 00:19:11.458 "name": "BaseBdev1", 00:19:11.458 "uuid": "9a4ac52d-867f-4281-8478-1edf818dfb6c", 00:19:11.458 "is_configured": true, 00:19:11.458 "data_offset": 0, 00:19:11.458 "data_size": 65536 00:19:11.458 }, 00:19:11.458 { 00:19:11.458 "name": "BaseBdev2", 00:19:11.458 "uuid": "f8eda9fa-8814-4e7e-b712-64c74981c620", 00:19:11.458 "is_configured": true, 00:19:11.458 "data_offset": 0, 00:19:11.458 "data_size": 65536 00:19:11.458 }, 00:19:11.458 { 00:19:11.458 "name": "BaseBdev3", 00:19:11.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.458 "is_configured": false, 00:19:11.458 "data_offset": 0, 00:19:11.458 "data_size": 0 00:19:11.458 }, 00:19:11.458 { 00:19:11.458 "name": "BaseBdev4", 00:19:11.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.458 "is_configured": false, 00:19:11.458 "data_offset": 0, 00:19:11.458 "data_size": 0 00:19:11.458 } 00:19:11.458 ] 00:19:11.458 }' 00:19:11.458 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.459 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.719 [2024-11-20 05:29:43.455600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:11.719 BaseBdev3 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.719 [ 00:19:11.719 { 00:19:11.719 "name": "BaseBdev3", 00:19:11.719 "aliases": [ 00:19:11.719 "811429c9-b497-4cfc-98c4-91e4e3b45b58" 00:19:11.719 ], 00:19:11.719 "product_name": "Malloc disk", 00:19:11.719 "block_size": 512, 00:19:11.719 "num_blocks": 65536, 00:19:11.719 "uuid": "811429c9-b497-4cfc-98c4-91e4e3b45b58", 00:19:11.719 "assigned_rate_limits": { 00:19:11.719 "rw_ios_per_sec": 0, 00:19:11.719 "rw_mbytes_per_sec": 0, 00:19:11.719 "r_mbytes_per_sec": 0, 00:19:11.719 "w_mbytes_per_sec": 0 
00:19:11.719 }, 00:19:11.719 "claimed": true, 00:19:11.719 "claim_type": "exclusive_write", 00:19:11.719 "zoned": false, 00:19:11.719 "supported_io_types": { 00:19:11.719 "read": true, 00:19:11.719 "write": true, 00:19:11.719 "unmap": true, 00:19:11.719 "flush": true, 00:19:11.719 "reset": true, 00:19:11.719 "nvme_admin": false, 00:19:11.719 "nvme_io": false, 00:19:11.719 "nvme_io_md": false, 00:19:11.719 "write_zeroes": true, 00:19:11.719 "zcopy": true, 00:19:11.719 "get_zone_info": false, 00:19:11.719 "zone_management": false, 00:19:11.719 "zone_append": false, 00:19:11.719 "compare": false, 00:19:11.719 "compare_and_write": false, 00:19:11.719 "abort": true, 00:19:11.719 "seek_hole": false, 00:19:11.719 "seek_data": false, 00:19:11.719 "copy": true, 00:19:11.719 "nvme_iov_md": false 00:19:11.719 }, 00:19:11.719 "memory_domains": [ 00:19:11.719 { 00:19:11.719 "dma_device_id": "system", 00:19:11.719 "dma_device_type": 1 00:19:11.719 }, 00:19:11.719 { 00:19:11.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.719 "dma_device_type": 2 00:19:11.719 } 00:19:11.719 ], 00:19:11.719 "driver_specific": {} 00:19:11.719 } 00:19:11.719 ] 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:11.719 05:29:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.719 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.719 "name": "Existed_Raid", 00:19:11.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.719 "strip_size_kb": 64, 00:19:11.719 "state": "configuring", 00:19:11.719 "raid_level": "concat", 00:19:11.719 "superblock": false, 00:19:11.719 "num_base_bdevs": 4, 00:19:11.719 "num_base_bdevs_discovered": 3, 00:19:11.719 "num_base_bdevs_operational": 4, 00:19:11.719 "base_bdevs_list": [ 00:19:11.719 { 00:19:11.719 "name": "BaseBdev1", 00:19:11.719 "uuid": "9a4ac52d-867f-4281-8478-1edf818dfb6c", 00:19:11.719 "is_configured": true, 00:19:11.719 "data_offset": 
0, 00:19:11.719 "data_size": 65536 00:19:11.719 }, 00:19:11.720 { 00:19:11.720 "name": "BaseBdev2", 00:19:11.720 "uuid": "f8eda9fa-8814-4e7e-b712-64c74981c620", 00:19:11.720 "is_configured": true, 00:19:11.720 "data_offset": 0, 00:19:11.720 "data_size": 65536 00:19:11.720 }, 00:19:11.720 { 00:19:11.720 "name": "BaseBdev3", 00:19:11.720 "uuid": "811429c9-b497-4cfc-98c4-91e4e3b45b58", 00:19:11.720 "is_configured": true, 00:19:11.720 "data_offset": 0, 00:19:11.720 "data_size": 65536 00:19:11.720 }, 00:19:11.720 { 00:19:11.720 "name": "BaseBdev4", 00:19:11.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.720 "is_configured": false, 00:19:11.720 "data_offset": 0, 00:19:11.720 "data_size": 0 00:19:11.720 } 00:19:11.720 ] 00:19:11.720 }' 00:19:11.720 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.720 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.980 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:11.981 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.981 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.240 [2024-11-20 05:29:43.829092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:12.240 [2024-11-20 05:29:43.829148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:12.241 [2024-11-20 05:29:43.829157] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:12.241 [2024-11-20 05:29:43.829483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:12.241 [2024-11-20 05:29:43.829720] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:12.241 [2024-11-20 05:29:43.829740] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:12.241 [2024-11-20 05:29:43.830043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.241 BaseBdev4 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.241 [ 00:19:12.241 { 00:19:12.241 "name": "BaseBdev4", 00:19:12.241 "aliases": [ 00:19:12.241 "610cb6c5-16ae-47b5-b618-0bd7b9696e61" 00:19:12.241 ], 00:19:12.241 
"product_name": "Malloc disk", 00:19:12.241 "block_size": 512, 00:19:12.241 "num_blocks": 65536, 00:19:12.241 "uuid": "610cb6c5-16ae-47b5-b618-0bd7b9696e61", 00:19:12.241 "assigned_rate_limits": { 00:19:12.241 "rw_ios_per_sec": 0, 00:19:12.241 "rw_mbytes_per_sec": 0, 00:19:12.241 "r_mbytes_per_sec": 0, 00:19:12.241 "w_mbytes_per_sec": 0 00:19:12.241 }, 00:19:12.241 "claimed": true, 00:19:12.241 "claim_type": "exclusive_write", 00:19:12.241 "zoned": false, 00:19:12.241 "supported_io_types": { 00:19:12.241 "read": true, 00:19:12.241 "write": true, 00:19:12.241 "unmap": true, 00:19:12.241 "flush": true, 00:19:12.241 "reset": true, 00:19:12.241 "nvme_admin": false, 00:19:12.241 "nvme_io": false, 00:19:12.241 "nvme_io_md": false, 00:19:12.241 "write_zeroes": true, 00:19:12.241 "zcopy": true, 00:19:12.241 "get_zone_info": false, 00:19:12.241 "zone_management": false, 00:19:12.241 "zone_append": false, 00:19:12.241 "compare": false, 00:19:12.241 "compare_and_write": false, 00:19:12.241 "abort": true, 00:19:12.241 "seek_hole": false, 00:19:12.241 "seek_data": false, 00:19:12.241 "copy": true, 00:19:12.241 "nvme_iov_md": false 00:19:12.241 }, 00:19:12.241 "memory_domains": [ 00:19:12.241 { 00:19:12.241 "dma_device_id": "system", 00:19:12.241 "dma_device_type": 1 00:19:12.241 }, 00:19:12.241 { 00:19:12.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.241 "dma_device_type": 2 00:19:12.241 } 00:19:12.241 ], 00:19:12.241 "driver_specific": {} 00:19:12.241 } 00:19:12.241 ] 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # 
verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.241 "name": "Existed_Raid", 00:19:12.241 "uuid": "abf6efbb-628a-4a37-82b1-0bbf450d0c2f", 00:19:12.241 "strip_size_kb": 64, 00:19:12.241 "state": "online", 00:19:12.241 "raid_level": "concat", 00:19:12.241 "superblock": false, 00:19:12.241 
"num_base_bdevs": 4, 00:19:12.241 "num_base_bdevs_discovered": 4, 00:19:12.241 "num_base_bdevs_operational": 4, 00:19:12.241 "base_bdevs_list": [ 00:19:12.241 { 00:19:12.241 "name": "BaseBdev1", 00:19:12.241 "uuid": "9a4ac52d-867f-4281-8478-1edf818dfb6c", 00:19:12.241 "is_configured": true, 00:19:12.241 "data_offset": 0, 00:19:12.241 "data_size": 65536 00:19:12.241 }, 00:19:12.241 { 00:19:12.241 "name": "BaseBdev2", 00:19:12.241 "uuid": "f8eda9fa-8814-4e7e-b712-64c74981c620", 00:19:12.241 "is_configured": true, 00:19:12.241 "data_offset": 0, 00:19:12.241 "data_size": 65536 00:19:12.241 }, 00:19:12.241 { 00:19:12.241 "name": "BaseBdev3", 00:19:12.241 "uuid": "811429c9-b497-4cfc-98c4-91e4e3b45b58", 00:19:12.241 "is_configured": true, 00:19:12.241 "data_offset": 0, 00:19:12.241 "data_size": 65536 00:19:12.241 }, 00:19:12.241 { 00:19:12.241 "name": "BaseBdev4", 00:19:12.241 "uuid": "610cb6c5-16ae-47b5-b618-0bd7b9696e61", 00:19:12.241 "is_configured": true, 00:19:12.241 "data_offset": 0, 00:19:12.241 "data_size": 65536 00:19:12.241 } 00:19:12.241 ] 00:19:12.241 }' 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.241 05:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:12.503 05:29:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.503 [2024-11-20 05:29:44.233696] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:12.503 "name": "Existed_Raid", 00:19:12.503 "aliases": [ 00:19:12.503 "abf6efbb-628a-4a37-82b1-0bbf450d0c2f" 00:19:12.503 ], 00:19:12.503 "product_name": "Raid Volume", 00:19:12.503 "block_size": 512, 00:19:12.503 "num_blocks": 262144, 00:19:12.503 "uuid": "abf6efbb-628a-4a37-82b1-0bbf450d0c2f", 00:19:12.503 "assigned_rate_limits": { 00:19:12.503 "rw_ios_per_sec": 0, 00:19:12.503 "rw_mbytes_per_sec": 0, 00:19:12.503 "r_mbytes_per_sec": 0, 00:19:12.503 "w_mbytes_per_sec": 0 00:19:12.503 }, 00:19:12.503 "claimed": false, 00:19:12.503 "zoned": false, 00:19:12.503 "supported_io_types": { 00:19:12.503 "read": true, 00:19:12.503 "write": true, 00:19:12.503 "unmap": true, 00:19:12.503 "flush": true, 00:19:12.503 "reset": true, 00:19:12.503 "nvme_admin": false, 00:19:12.503 "nvme_io": false, 00:19:12.503 "nvme_io_md": false, 00:19:12.503 "write_zeroes": true, 00:19:12.503 "zcopy": false, 00:19:12.503 "get_zone_info": false, 00:19:12.503 "zone_management": false, 00:19:12.503 "zone_append": false, 00:19:12.503 "compare": false, 00:19:12.503 "compare_and_write": false, 00:19:12.503 "abort": false, 00:19:12.503 "seek_hole": false, 00:19:12.503 "seek_data": false, 00:19:12.503 "copy": false, 00:19:12.503 "nvme_iov_md": false 00:19:12.503 }, 
00:19:12.503 "memory_domains": [ 00:19:12.503 { 00:19:12.503 "dma_device_id": "system", 00:19:12.503 "dma_device_type": 1 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.503 "dma_device_type": 2 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "dma_device_id": "system", 00:19:12.503 "dma_device_type": 1 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.503 "dma_device_type": 2 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "dma_device_id": "system", 00:19:12.503 "dma_device_type": 1 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.503 "dma_device_type": 2 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "dma_device_id": "system", 00:19:12.503 "dma_device_type": 1 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.503 "dma_device_type": 2 00:19:12.503 } 00:19:12.503 ], 00:19:12.503 "driver_specific": { 00:19:12.503 "raid": { 00:19:12.503 "uuid": "abf6efbb-628a-4a37-82b1-0bbf450d0c2f", 00:19:12.503 "strip_size_kb": 64, 00:19:12.503 "state": "online", 00:19:12.503 "raid_level": "concat", 00:19:12.503 "superblock": false, 00:19:12.503 "num_base_bdevs": 4, 00:19:12.503 "num_base_bdevs_discovered": 4, 00:19:12.503 "num_base_bdevs_operational": 4, 00:19:12.503 "base_bdevs_list": [ 00:19:12.503 { 00:19:12.503 "name": "BaseBdev1", 00:19:12.503 "uuid": "9a4ac52d-867f-4281-8478-1edf818dfb6c", 00:19:12.503 "is_configured": true, 00:19:12.503 "data_offset": 0, 00:19:12.503 "data_size": 65536 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "name": "BaseBdev2", 00:19:12.503 "uuid": "f8eda9fa-8814-4e7e-b712-64c74981c620", 00:19:12.503 "is_configured": true, 00:19:12.503 "data_offset": 0, 00:19:12.503 "data_size": 65536 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "name": "BaseBdev3", 00:19:12.503 "uuid": "811429c9-b497-4cfc-98c4-91e4e3b45b58", 00:19:12.503 "is_configured": true, 00:19:12.503 "data_offset": 0, 
00:19:12.503 "data_size": 65536 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "name": "BaseBdev4", 00:19:12.503 "uuid": "610cb6c5-16ae-47b5-b618-0bd7b9696e61", 00:19:12.503 "is_configured": true, 00:19:12.503 "data_offset": 0, 00:19:12.503 "data_size": 65536 00:19:12.503 } 00:19:12.503 ] 00:19:12.503 } 00:19:12.503 } 00:19:12.503 }' 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:12.503 BaseBdev2 00:19:12.503 BaseBdev3 00:19:12.503 BaseBdev4' 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.503 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.764 [2024-11-20 05:29:44.489378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:12.764 [2024-11-20 05:29:44.489418] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:12.764 [2024-11-20 05:29:44.489477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@262 -- # expected_state=offline 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.764 "name": "Existed_Raid", 00:19:12.764 "uuid": "abf6efbb-628a-4a37-82b1-0bbf450d0c2f", 00:19:12.764 
"strip_size_kb": 64, 00:19:12.764 "state": "offline", 00:19:12.764 "raid_level": "concat", 00:19:12.764 "superblock": false, 00:19:12.764 "num_base_bdevs": 4, 00:19:12.764 "num_base_bdevs_discovered": 3, 00:19:12.764 "num_base_bdevs_operational": 3, 00:19:12.764 "base_bdevs_list": [ 00:19:12.764 { 00:19:12.764 "name": null, 00:19:12.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.764 "is_configured": false, 00:19:12.764 "data_offset": 0, 00:19:12.764 "data_size": 65536 00:19:12.764 }, 00:19:12.764 { 00:19:12.764 "name": "BaseBdev2", 00:19:12.764 "uuid": "f8eda9fa-8814-4e7e-b712-64c74981c620", 00:19:12.764 "is_configured": true, 00:19:12.764 "data_offset": 0, 00:19:12.764 "data_size": 65536 00:19:12.764 }, 00:19:12.764 { 00:19:12.764 "name": "BaseBdev3", 00:19:12.764 "uuid": "811429c9-b497-4cfc-98c4-91e4e3b45b58", 00:19:12.764 "is_configured": true, 00:19:12.764 "data_offset": 0, 00:19:12.764 "data_size": 65536 00:19:12.764 }, 00:19:12.764 { 00:19:12.764 "name": "BaseBdev4", 00:19:12.764 "uuid": "610cb6c5-16ae-47b5-b618-0bd7b9696e61", 00:19:12.764 "is_configured": true, 00:19:12.764 "data_offset": 0, 00:19:12.764 "data_size": 65536 00:19:12.764 } 00:19:12.764 ] 00:19:12.764 }' 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.764 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.336 05:29:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.336 [2024-11-20 05:29:44.904309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.336 05:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:13.336 05:29:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.336 [2024-11-20 05:29:45.008169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.336 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.336 [2024-11-20 05:29:45.111931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:13.336 [2024-11-20 
05:29:45.112004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.630 BaseBdev2 00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:13.630 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.631 [ 00:19:13.631 { 00:19:13.631 "name": "BaseBdev2", 00:19:13.631 "aliases": [ 00:19:13.631 "74629ae9-2c63-4010-8be4-189026c27513" 00:19:13.631 ], 00:19:13.631 "product_name": "Malloc disk", 00:19:13.631 "block_size": 512, 00:19:13.631 "num_blocks": 65536, 00:19:13.631 "uuid": "74629ae9-2c63-4010-8be4-189026c27513", 00:19:13.631 "assigned_rate_limits": { 00:19:13.631 "rw_ios_per_sec": 0, 00:19:13.631 "rw_mbytes_per_sec": 0, 00:19:13.631 "r_mbytes_per_sec": 0, 00:19:13.631 "w_mbytes_per_sec": 0 00:19:13.631 }, 
00:19:13.631 "claimed": false, 00:19:13.631 "zoned": false, 00:19:13.631 "supported_io_types": { 00:19:13.631 "read": true, 00:19:13.631 "write": true, 00:19:13.631 "unmap": true, 00:19:13.631 "flush": true, 00:19:13.631 "reset": true, 00:19:13.631 "nvme_admin": false, 00:19:13.631 "nvme_io": false, 00:19:13.631 "nvme_io_md": false, 00:19:13.631 "write_zeroes": true, 00:19:13.631 "zcopy": true, 00:19:13.631 "get_zone_info": false, 00:19:13.631 "zone_management": false, 00:19:13.631 "zone_append": false, 00:19:13.631 "compare": false, 00:19:13.631 "compare_and_write": false, 00:19:13.631 "abort": true, 00:19:13.631 "seek_hole": false, 00:19:13.631 "seek_data": false, 00:19:13.631 "copy": true, 00:19:13.631 "nvme_iov_md": false 00:19:13.631 }, 00:19:13.631 "memory_domains": [ 00:19:13.631 { 00:19:13.631 "dma_device_id": "system", 00:19:13.631 "dma_device_type": 1 00:19:13.631 }, 00:19:13.631 { 00:19:13.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.631 "dma_device_type": 2 00:19:13.631 } 00:19:13.631 ], 00:19:13.631 "driver_specific": {} 00:19:13.631 } 00:19:13.631 ] 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.631 BaseBdev3 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.631 
05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.631 [ 00:19:13.631 { 00:19:13.631 "name": "BaseBdev3", 00:19:13.631 "aliases": [ 00:19:13.631 "bf55ef0b-8599-484c-902c-f53425a79080" 00:19:13.631 ], 00:19:13.631 "product_name": "Malloc disk", 00:19:13.631 "block_size": 512, 00:19:13.631 "num_blocks": 65536, 00:19:13.631 "uuid": "bf55ef0b-8599-484c-902c-f53425a79080", 00:19:13.631 "assigned_rate_limits": { 00:19:13.631 "rw_ios_per_sec": 0, 00:19:13.631 "rw_mbytes_per_sec": 0, 00:19:13.631 "r_mbytes_per_sec": 0, 00:19:13.631 "w_mbytes_per_sec": 0 00:19:13.631 }, 00:19:13.631 "claimed": 
false, 00:19:13.631 "zoned": false, 00:19:13.631 "supported_io_types": { 00:19:13.631 "read": true, 00:19:13.631 "write": true, 00:19:13.631 "unmap": true, 00:19:13.631 "flush": true, 00:19:13.631 "reset": true, 00:19:13.631 "nvme_admin": false, 00:19:13.631 "nvme_io": false, 00:19:13.631 "nvme_io_md": false, 00:19:13.631 "write_zeroes": true, 00:19:13.631 "zcopy": true, 00:19:13.631 "get_zone_info": false, 00:19:13.631 "zone_management": false, 00:19:13.631 "zone_append": false, 00:19:13.631 "compare": false, 00:19:13.631 "compare_and_write": false, 00:19:13.631 "abort": true, 00:19:13.631 "seek_hole": false, 00:19:13.631 "seek_data": false, 00:19:13.631 "copy": true, 00:19:13.631 "nvme_iov_md": false 00:19:13.631 }, 00:19:13.631 "memory_domains": [ 00:19:13.631 { 00:19:13.631 "dma_device_id": "system", 00:19:13.631 "dma_device_type": 1 00:19:13.631 }, 00:19:13.631 { 00:19:13.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.631 "dma_device_type": 2 00:19:13.631 } 00:19:13.631 ], 00:19:13.631 "driver_specific": {} 00:19:13.631 } 00:19:13.631 ] 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.631 BaseBdev4 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.631 05:29:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.631 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.631 [ 00:19:13.631 { 00:19:13.631 "name": "BaseBdev4", 00:19:13.631 "aliases": [ 00:19:13.631 "9407fffc-9724-48bc-99f1-44fce4464e6c" 00:19:13.631 ], 00:19:13.631 "product_name": "Malloc disk", 00:19:13.631 "block_size": 512, 00:19:13.631 "num_blocks": 65536, 00:19:13.631 "uuid": "9407fffc-9724-48bc-99f1-44fce4464e6c", 00:19:13.631 "assigned_rate_limits": { 00:19:13.631 "rw_ios_per_sec": 0, 00:19:13.631 "rw_mbytes_per_sec": 0, 00:19:13.631 "r_mbytes_per_sec": 0, 00:19:13.631 "w_mbytes_per_sec": 0 00:19:13.631 }, 00:19:13.631 "claimed": false, 
00:19:13.631 "zoned": false, 00:19:13.631 "supported_io_types": { 00:19:13.631 "read": true, 00:19:13.631 "write": true, 00:19:13.631 "unmap": true, 00:19:13.631 "flush": true, 00:19:13.631 "reset": true, 00:19:13.631 "nvme_admin": false, 00:19:13.631 "nvme_io": false, 00:19:13.631 "nvme_io_md": false, 00:19:13.631 "write_zeroes": true, 00:19:13.631 "zcopy": true, 00:19:13.631 "get_zone_info": false, 00:19:13.631 "zone_management": false, 00:19:13.631 "zone_append": false, 00:19:13.631 "compare": false, 00:19:13.631 "compare_and_write": false, 00:19:13.631 "abort": true, 00:19:13.631 "seek_hole": false, 00:19:13.631 "seek_data": false, 00:19:13.631 "copy": true, 00:19:13.631 "nvme_iov_md": false 00:19:13.631 }, 00:19:13.632 "memory_domains": [ 00:19:13.632 { 00:19:13.632 "dma_device_id": "system", 00:19:13.632 "dma_device_type": 1 00:19:13.632 }, 00:19:13.632 { 00:19:13.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.632 "dma_device_type": 2 00:19:13.632 } 00:19:13.632 ], 00:19:13.632 "driver_specific": {} 00:19:13.632 } 00:19:13.632 ] 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.632 [2024-11-20 05:29:45.384643] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev1 00:19:13.632 [2024-11-20 05:29:45.384704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:13.632 [2024-11-20 05:29:45.384728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:13.632 [2024-11-20 05:29:45.386740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:13.632 [2024-11-20 05:29:45.386948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.632 "name": "Existed_Raid", 00:19:13.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.632 "strip_size_kb": 64, 00:19:13.632 "state": "configuring", 00:19:13.632 "raid_level": "concat", 00:19:13.632 "superblock": false, 00:19:13.632 "num_base_bdevs": 4, 00:19:13.632 "num_base_bdevs_discovered": 3, 00:19:13.632 "num_base_bdevs_operational": 4, 00:19:13.632 "base_bdevs_list": [ 00:19:13.632 { 00:19:13.632 "name": "BaseBdev1", 00:19:13.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.632 "is_configured": false, 00:19:13.632 "data_offset": 0, 00:19:13.632 "data_size": 0 00:19:13.632 }, 00:19:13.632 { 00:19:13.632 "name": "BaseBdev2", 00:19:13.632 "uuid": "74629ae9-2c63-4010-8be4-189026c27513", 00:19:13.632 "is_configured": true, 00:19:13.632 "data_offset": 0, 00:19:13.632 "data_size": 65536 00:19:13.632 }, 00:19:13.632 { 00:19:13.632 "name": "BaseBdev3", 00:19:13.632 "uuid": "bf55ef0b-8599-484c-902c-f53425a79080", 00:19:13.632 "is_configured": true, 00:19:13.632 "data_offset": 0, 00:19:13.632 "data_size": 65536 00:19:13.632 }, 00:19:13.632 { 00:19:13.632 "name": "BaseBdev4", 00:19:13.632 "uuid": "9407fffc-9724-48bc-99f1-44fce4464e6c", 00:19:13.632 "is_configured": true, 00:19:13.632 "data_offset": 0, 00:19:13.632 "data_size": 65536 00:19:13.632 } 00:19:13.632 ] 00:19:13.632 }' 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.632 05:29:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.902 [2024-11-20 05:29:45.688743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.902 "name": "Existed_Raid", 00:19:13.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.902 "strip_size_kb": 64, 00:19:13.902 "state": "configuring", 00:19:13.902 "raid_level": "concat", 00:19:13.902 "superblock": false, 00:19:13.902 "num_base_bdevs": 4, 00:19:13.902 "num_base_bdevs_discovered": 2, 00:19:13.902 "num_base_bdevs_operational": 4, 00:19:13.902 "base_bdevs_list": [ 00:19:13.902 { 00:19:13.902 "name": "BaseBdev1", 00:19:13.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.902 "is_configured": false, 00:19:13.902 "data_offset": 0, 00:19:13.902 "data_size": 0 00:19:13.902 }, 00:19:13.902 { 00:19:13.902 "name": null, 00:19:13.902 "uuid": "74629ae9-2c63-4010-8be4-189026c27513", 00:19:13.902 "is_configured": false, 00:19:13.902 "data_offset": 0, 00:19:13.902 "data_size": 65536 00:19:13.902 }, 00:19:13.902 { 00:19:13.902 "name": "BaseBdev3", 00:19:13.902 "uuid": "bf55ef0b-8599-484c-902c-f53425a79080", 00:19:13.902 "is_configured": true, 00:19:13.902 "data_offset": 0, 00:19:13.902 "data_size": 65536 00:19:13.902 }, 00:19:13.902 { 00:19:13.902 "name": "BaseBdev4", 00:19:13.902 "uuid": "9407fffc-9724-48bc-99f1-44fce4464e6c", 00:19:13.902 "is_configured": true, 00:19:13.902 "data_offset": 0, 00:19:13.902 "data_size": 65536 00:19:13.902 } 00:19:13.902 ] 00:19:13.902 }' 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.902 05:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.472 05:29:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.472 [2024-11-20 05:29:46.089744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:14.472 BaseBdev1 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.472 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.472 [ 00:19:14.472 { 00:19:14.472 "name": "BaseBdev1", 00:19:14.472 "aliases": [ 00:19:14.472 "5eddec2b-d46d-421d-b1c5-d61549f71746" 00:19:14.472 ], 00:19:14.472 "product_name": "Malloc disk", 00:19:14.472 "block_size": 512, 00:19:14.472 "num_blocks": 65536, 00:19:14.472 "uuid": "5eddec2b-d46d-421d-b1c5-d61549f71746", 00:19:14.472 "assigned_rate_limits": { 00:19:14.472 "rw_ios_per_sec": 0, 00:19:14.472 "rw_mbytes_per_sec": 0, 00:19:14.472 "r_mbytes_per_sec": 0, 00:19:14.472 "w_mbytes_per_sec": 0 00:19:14.472 }, 00:19:14.472 "claimed": true, 00:19:14.472 "claim_type": "exclusive_write", 00:19:14.472 "zoned": false, 00:19:14.472 "supported_io_types": { 00:19:14.472 "read": true, 00:19:14.472 "write": true, 00:19:14.472 "unmap": true, 00:19:14.472 "flush": true, 00:19:14.472 "reset": true, 00:19:14.472 "nvme_admin": false, 00:19:14.472 "nvme_io": false, 00:19:14.472 "nvme_io_md": false, 00:19:14.472 "write_zeroes": true, 00:19:14.472 "zcopy": true, 00:19:14.472 "get_zone_info": false, 00:19:14.472 "zone_management": false, 00:19:14.472 "zone_append": false, 00:19:14.472 "compare": false, 00:19:14.472 "compare_and_write": false, 00:19:14.473 "abort": true, 00:19:14.473 "seek_hole": false, 00:19:14.473 "seek_data": false, 00:19:14.473 
"copy": true, 00:19:14.473 "nvme_iov_md": false 00:19:14.473 }, 00:19:14.473 "memory_domains": [ 00:19:14.473 { 00:19:14.473 "dma_device_id": "system", 00:19:14.473 "dma_device_type": 1 00:19:14.473 }, 00:19:14.473 { 00:19:14.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.473 "dma_device_type": 2 00:19:14.473 } 00:19:14.473 ], 00:19:14.473 "driver_specific": {} 00:19:14.473 } 00:19:14.473 ] 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.473 "name": "Existed_Raid", 00:19:14.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.473 "strip_size_kb": 64, 00:19:14.473 "state": "configuring", 00:19:14.473 "raid_level": "concat", 00:19:14.473 "superblock": false, 00:19:14.473 "num_base_bdevs": 4, 00:19:14.473 "num_base_bdevs_discovered": 3, 00:19:14.473 "num_base_bdevs_operational": 4, 00:19:14.473 "base_bdevs_list": [ 00:19:14.473 { 00:19:14.473 "name": "BaseBdev1", 00:19:14.473 "uuid": "5eddec2b-d46d-421d-b1c5-d61549f71746", 00:19:14.473 "is_configured": true, 00:19:14.473 "data_offset": 0, 00:19:14.473 "data_size": 65536 00:19:14.473 }, 00:19:14.473 { 00:19:14.473 "name": null, 00:19:14.473 "uuid": "74629ae9-2c63-4010-8be4-189026c27513", 00:19:14.473 "is_configured": false, 00:19:14.473 "data_offset": 0, 00:19:14.473 "data_size": 65536 00:19:14.473 }, 00:19:14.473 { 00:19:14.473 "name": "BaseBdev3", 00:19:14.473 "uuid": "bf55ef0b-8599-484c-902c-f53425a79080", 00:19:14.473 "is_configured": true, 00:19:14.473 "data_offset": 0, 00:19:14.473 "data_size": 65536 00:19:14.473 }, 00:19:14.473 { 00:19:14.473 "name": "BaseBdev4", 00:19:14.473 "uuid": "9407fffc-9724-48bc-99f1-44fce4464e6c", 00:19:14.473 "is_configured": true, 00:19:14.473 "data_offset": 0, 00:19:14.473 "data_size": 65536 00:19:14.473 } 00:19:14.473 ] 00:19:14.473 }' 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.473 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.731 [2024-11-20 05:29:46.469900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:14.731 05:29:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.731 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.732 "name": "Existed_Raid", 00:19:14.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.732 "strip_size_kb": 64, 00:19:14.732 "state": "configuring", 00:19:14.732 "raid_level": "concat", 00:19:14.732 "superblock": false, 00:19:14.732 "num_base_bdevs": 4, 00:19:14.732 "num_base_bdevs_discovered": 2, 00:19:14.732 "num_base_bdevs_operational": 4, 00:19:14.732 "base_bdevs_list": [ 00:19:14.732 { 00:19:14.732 "name": "BaseBdev1", 00:19:14.732 "uuid": "5eddec2b-d46d-421d-b1c5-d61549f71746", 00:19:14.732 "is_configured": true, 00:19:14.732 "data_offset": 0, 00:19:14.732 "data_size": 65536 00:19:14.732 }, 00:19:14.732 { 00:19:14.732 "name": null, 00:19:14.732 "uuid": "74629ae9-2c63-4010-8be4-189026c27513", 00:19:14.732 "is_configured": false, 00:19:14.732 "data_offset": 0, 00:19:14.732 "data_size": 65536 00:19:14.732 }, 00:19:14.732 { 00:19:14.732 "name": null, 00:19:14.732 "uuid": 
"bf55ef0b-8599-484c-902c-f53425a79080", 00:19:14.732 "is_configured": false, 00:19:14.732 "data_offset": 0, 00:19:14.732 "data_size": 65536 00:19:14.732 }, 00:19:14.732 { 00:19:14.732 "name": "BaseBdev4", 00:19:14.732 "uuid": "9407fffc-9724-48bc-99f1-44fce4464e6c", 00:19:14.732 "is_configured": true, 00:19:14.732 "data_offset": 0, 00:19:14.732 "data_size": 65536 00:19:14.732 } 00:19:14.732 ] 00:19:14.732 }' 00:19:14.732 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.732 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.990 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.990 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.990 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.990 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:14.990 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.990 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:14.990 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:14.990 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.990 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.990 [2024-11-20 05:29:46.817959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 4 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.249 "name": "Existed_Raid", 00:19:15.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.249 "strip_size_kb": 64, 00:19:15.249 "state": "configuring", 00:19:15.249 "raid_level": "concat", 00:19:15.249 "superblock": false, 00:19:15.249 "num_base_bdevs": 4, 
00:19:15.249 "num_base_bdevs_discovered": 3, 00:19:15.249 "num_base_bdevs_operational": 4, 00:19:15.249 "base_bdevs_list": [ 00:19:15.249 { 00:19:15.249 "name": "BaseBdev1", 00:19:15.249 "uuid": "5eddec2b-d46d-421d-b1c5-d61549f71746", 00:19:15.249 "is_configured": true, 00:19:15.249 "data_offset": 0, 00:19:15.249 "data_size": 65536 00:19:15.249 }, 00:19:15.249 { 00:19:15.249 "name": null, 00:19:15.249 "uuid": "74629ae9-2c63-4010-8be4-189026c27513", 00:19:15.249 "is_configured": false, 00:19:15.249 "data_offset": 0, 00:19:15.249 "data_size": 65536 00:19:15.249 }, 00:19:15.249 { 00:19:15.249 "name": "BaseBdev3", 00:19:15.249 "uuid": "bf55ef0b-8599-484c-902c-f53425a79080", 00:19:15.249 "is_configured": true, 00:19:15.249 "data_offset": 0, 00:19:15.249 "data_size": 65536 00:19:15.249 }, 00:19:15.249 { 00:19:15.249 "name": "BaseBdev4", 00:19:15.249 "uuid": "9407fffc-9724-48bc-99f1-44fce4464e6c", 00:19:15.249 "is_configured": true, 00:19:15.249 "data_offset": 0, 00:19:15.249 "data_size": 65536 00:19:15.249 } 00:19:15.249 ] 00:19:15.249 }' 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.249 05:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.509 [2024-11-20 05:29:47.170064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.509 "name": "Existed_Raid", 00:19:15.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.509 "strip_size_kb": 64, 00:19:15.509 "state": "configuring", 00:19:15.509 "raid_level": "concat", 00:19:15.509 "superblock": false, 00:19:15.509 "num_base_bdevs": 4, 00:19:15.509 "num_base_bdevs_discovered": 2, 00:19:15.509 "num_base_bdevs_operational": 4, 00:19:15.509 "base_bdevs_list": [ 00:19:15.509 { 00:19:15.509 "name": null, 00:19:15.509 "uuid": "5eddec2b-d46d-421d-b1c5-d61549f71746", 00:19:15.509 "is_configured": false, 00:19:15.509 "data_offset": 0, 00:19:15.509 "data_size": 65536 00:19:15.509 }, 00:19:15.509 { 00:19:15.509 "name": null, 00:19:15.509 "uuid": "74629ae9-2c63-4010-8be4-189026c27513", 00:19:15.509 "is_configured": false, 00:19:15.509 "data_offset": 0, 00:19:15.509 "data_size": 65536 00:19:15.509 }, 00:19:15.509 { 00:19:15.509 "name": "BaseBdev3", 00:19:15.509 "uuid": "bf55ef0b-8599-484c-902c-f53425a79080", 00:19:15.509 "is_configured": true, 00:19:15.509 "data_offset": 0, 00:19:15.509 "data_size": 65536 00:19:15.509 }, 00:19:15.509 { 00:19:15.509 "name": "BaseBdev4", 00:19:15.509 "uuid": "9407fffc-9724-48bc-99f1-44fce4464e6c", 00:19:15.509 "is_configured": true, 00:19:15.509 "data_offset": 0, 00:19:15.509 "data_size": 65536 00:19:15.509 } 00:19:15.509 ] 00:19:15.509 }' 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.509 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.767 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.767 05:29:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.767 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.767 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:15.767 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.767 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:15.767 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:15.767 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.767 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.767 [2024-11-20 05:29:47.555983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:15.767 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.767 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:15.768 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.768 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:15.768 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:15.768 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.768 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:15.768 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.768 05:29:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.768 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.768 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.768 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.768 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.768 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.768 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.768 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.768 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.768 "name": "Existed_Raid", 00:19:15.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.768 "strip_size_kb": 64, 00:19:15.768 "state": "configuring", 00:19:15.768 "raid_level": "concat", 00:19:15.768 "superblock": false, 00:19:15.768 "num_base_bdevs": 4, 00:19:15.768 "num_base_bdevs_discovered": 3, 00:19:15.768 "num_base_bdevs_operational": 4, 00:19:15.768 "base_bdevs_list": [ 00:19:15.768 { 00:19:15.768 "name": null, 00:19:15.768 "uuid": "5eddec2b-d46d-421d-b1c5-d61549f71746", 00:19:15.768 "is_configured": false, 00:19:15.768 "data_offset": 0, 00:19:15.768 "data_size": 65536 00:19:15.768 }, 00:19:15.768 { 00:19:15.768 "name": "BaseBdev2", 00:19:15.768 "uuid": "74629ae9-2c63-4010-8be4-189026c27513", 00:19:15.768 "is_configured": true, 00:19:15.768 "data_offset": 0, 00:19:15.768 "data_size": 65536 00:19:15.768 }, 00:19:15.768 { 00:19:15.768 "name": "BaseBdev3", 00:19:15.768 "uuid": "bf55ef0b-8599-484c-902c-f53425a79080", 00:19:15.768 "is_configured": true, 00:19:15.768 "data_offset": 0, 
00:19:15.768 "data_size": 65536 00:19:15.768 }, 00:19:15.768 { 00:19:15.768 "name": "BaseBdev4", 00:19:15.768 "uuid": "9407fffc-9724-48bc-99f1-44fce4464e6c", 00:19:15.768 "is_configured": true, 00:19:15.768 "data_offset": 0, 00:19:15.768 "data_size": 65536 00:19:15.768 } 00:19:15.768 ] 00:19:15.768 }' 00:19:15.768 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.768 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5eddec2b-d46d-421d-b1c5-d61549f71746 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.333 [2024-11-20 05:29:47.948317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:16.333 [2024-11-20 05:29:47.948361] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:16.333 [2024-11-20 05:29:47.948389] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:16.333 [2024-11-20 05:29:47.948646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:16.333 [2024-11-20 05:29:47.948760] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:16.333 [2024-11-20 05:29:47.948768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:16.333 [2024-11-20 05:29:47.948959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.333 NewBaseBdev 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:16.333 
05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.333 [ 00:19:16.333 { 00:19:16.333 "name": "NewBaseBdev", 00:19:16.333 "aliases": [ 00:19:16.333 "5eddec2b-d46d-421d-b1c5-d61549f71746" 00:19:16.333 ], 00:19:16.333 "product_name": "Malloc disk", 00:19:16.333 "block_size": 512, 00:19:16.333 "num_blocks": 65536, 00:19:16.333 "uuid": "5eddec2b-d46d-421d-b1c5-d61549f71746", 00:19:16.333 "assigned_rate_limits": { 00:19:16.333 "rw_ios_per_sec": 0, 00:19:16.333 "rw_mbytes_per_sec": 0, 00:19:16.333 "r_mbytes_per_sec": 0, 00:19:16.333 "w_mbytes_per_sec": 0 00:19:16.333 }, 00:19:16.333 "claimed": true, 00:19:16.333 "claim_type": "exclusive_write", 00:19:16.333 "zoned": false, 00:19:16.333 "supported_io_types": { 00:19:16.333 "read": true, 00:19:16.333 "write": true, 00:19:16.333 "unmap": true, 00:19:16.333 "flush": true, 00:19:16.333 "reset": true, 00:19:16.333 "nvme_admin": false, 00:19:16.333 "nvme_io": false, 00:19:16.333 "nvme_io_md": false, 00:19:16.333 "write_zeroes": true, 00:19:16.333 "zcopy": true, 00:19:16.333 "get_zone_info": false, 00:19:16.333 "zone_management": false, 00:19:16.333 "zone_append": false, 00:19:16.333 "compare": false, 00:19:16.333 "compare_and_write": false, 00:19:16.333 "abort": true, 00:19:16.333 "seek_hole": false, 00:19:16.333 "seek_data": false, 00:19:16.333 "copy": true, 00:19:16.333 "nvme_iov_md": false 00:19:16.333 }, 00:19:16.333 
"memory_domains": [ 00:19:16.333 { 00:19:16.333 "dma_device_id": "system", 00:19:16.333 "dma_device_type": 1 00:19:16.333 }, 00:19:16.333 { 00:19:16.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.333 "dma_device_type": 2 00:19:16.333 } 00:19:16.333 ], 00:19:16.333 "driver_specific": {} 00:19:16.333 } 00:19:16.333 ] 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.333 05:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.333 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.333 "name": "Existed_Raid", 00:19:16.333 "uuid": "864eec12-2c97-4b89-952c-e82edd299e4c", 00:19:16.333 "strip_size_kb": 64, 00:19:16.333 "state": "online", 00:19:16.333 "raid_level": "concat", 00:19:16.333 "superblock": false, 00:19:16.333 "num_base_bdevs": 4, 00:19:16.333 "num_base_bdevs_discovered": 4, 00:19:16.333 "num_base_bdevs_operational": 4, 00:19:16.333 "base_bdevs_list": [ 00:19:16.333 { 00:19:16.333 "name": "NewBaseBdev", 00:19:16.333 "uuid": "5eddec2b-d46d-421d-b1c5-d61549f71746", 00:19:16.333 "is_configured": true, 00:19:16.333 "data_offset": 0, 00:19:16.333 "data_size": 65536 00:19:16.333 }, 00:19:16.333 { 00:19:16.333 "name": "BaseBdev2", 00:19:16.333 "uuid": "74629ae9-2c63-4010-8be4-189026c27513", 00:19:16.333 "is_configured": true, 00:19:16.333 "data_offset": 0, 00:19:16.333 "data_size": 65536 00:19:16.333 }, 00:19:16.333 { 00:19:16.333 "name": "BaseBdev3", 00:19:16.334 "uuid": "bf55ef0b-8599-484c-902c-f53425a79080", 00:19:16.334 "is_configured": true, 00:19:16.334 "data_offset": 0, 00:19:16.334 "data_size": 65536 00:19:16.334 }, 00:19:16.334 { 00:19:16.334 "name": "BaseBdev4", 00:19:16.334 "uuid": "9407fffc-9724-48bc-99f1-44fce4464e6c", 00:19:16.334 "is_configured": true, 00:19:16.334 "data_offset": 0, 00:19:16.334 "data_size": 65536 00:19:16.334 } 00:19:16.334 ] 00:19:16.334 }' 00:19:16.334 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.334 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:16.592 [2024-11-20 05:29:48.300770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:16.592 "name": "Existed_Raid", 00:19:16.592 "aliases": [ 00:19:16.592 "864eec12-2c97-4b89-952c-e82edd299e4c" 00:19:16.592 ], 00:19:16.592 "product_name": "Raid Volume", 00:19:16.592 "block_size": 512, 00:19:16.592 "num_blocks": 262144, 00:19:16.592 "uuid": "864eec12-2c97-4b89-952c-e82edd299e4c", 00:19:16.592 "assigned_rate_limits": { 00:19:16.592 "rw_ios_per_sec": 0, 00:19:16.592 "rw_mbytes_per_sec": 0, 00:19:16.592 "r_mbytes_per_sec": 0, 00:19:16.592 "w_mbytes_per_sec": 0 00:19:16.592 }, 00:19:16.592 "claimed": false, 00:19:16.592 "zoned": false, 00:19:16.592 "supported_io_types": { 00:19:16.592 "read": true, 
00:19:16.592 "write": true, 00:19:16.592 "unmap": true, 00:19:16.592 "flush": true, 00:19:16.592 "reset": true, 00:19:16.592 "nvme_admin": false, 00:19:16.592 "nvme_io": false, 00:19:16.592 "nvme_io_md": false, 00:19:16.592 "write_zeroes": true, 00:19:16.592 "zcopy": false, 00:19:16.592 "get_zone_info": false, 00:19:16.592 "zone_management": false, 00:19:16.592 "zone_append": false, 00:19:16.592 "compare": false, 00:19:16.592 "compare_and_write": false, 00:19:16.592 "abort": false, 00:19:16.592 "seek_hole": false, 00:19:16.592 "seek_data": false, 00:19:16.592 "copy": false, 00:19:16.592 "nvme_iov_md": false 00:19:16.592 }, 00:19:16.592 "memory_domains": [ 00:19:16.592 { 00:19:16.592 "dma_device_id": "system", 00:19:16.592 "dma_device_type": 1 00:19:16.592 }, 00:19:16.592 { 00:19:16.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.592 "dma_device_type": 2 00:19:16.592 }, 00:19:16.592 { 00:19:16.592 "dma_device_id": "system", 00:19:16.592 "dma_device_type": 1 00:19:16.592 }, 00:19:16.592 { 00:19:16.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.592 "dma_device_type": 2 00:19:16.592 }, 00:19:16.592 { 00:19:16.592 "dma_device_id": "system", 00:19:16.592 "dma_device_type": 1 00:19:16.592 }, 00:19:16.592 { 00:19:16.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.592 "dma_device_type": 2 00:19:16.592 }, 00:19:16.592 { 00:19:16.592 "dma_device_id": "system", 00:19:16.592 "dma_device_type": 1 00:19:16.592 }, 00:19:16.592 { 00:19:16.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.592 "dma_device_type": 2 00:19:16.592 } 00:19:16.592 ], 00:19:16.592 "driver_specific": { 00:19:16.592 "raid": { 00:19:16.592 "uuid": "864eec12-2c97-4b89-952c-e82edd299e4c", 00:19:16.592 "strip_size_kb": 64, 00:19:16.592 "state": "online", 00:19:16.592 "raid_level": "concat", 00:19:16.592 "superblock": false, 00:19:16.592 "num_base_bdevs": 4, 00:19:16.592 "num_base_bdevs_discovered": 4, 00:19:16.592 "num_base_bdevs_operational": 4, 00:19:16.592 "base_bdevs_list": [ 
00:19:16.592 { 00:19:16.592 "name": "NewBaseBdev", 00:19:16.592 "uuid": "5eddec2b-d46d-421d-b1c5-d61549f71746", 00:19:16.592 "is_configured": true, 00:19:16.592 "data_offset": 0, 00:19:16.592 "data_size": 65536 00:19:16.592 }, 00:19:16.592 { 00:19:16.592 "name": "BaseBdev2", 00:19:16.592 "uuid": "74629ae9-2c63-4010-8be4-189026c27513", 00:19:16.592 "is_configured": true, 00:19:16.592 "data_offset": 0, 00:19:16.592 "data_size": 65536 00:19:16.592 }, 00:19:16.592 { 00:19:16.592 "name": "BaseBdev3", 00:19:16.592 "uuid": "bf55ef0b-8599-484c-902c-f53425a79080", 00:19:16.592 "is_configured": true, 00:19:16.592 "data_offset": 0, 00:19:16.592 "data_size": 65536 00:19:16.592 }, 00:19:16.592 { 00:19:16.592 "name": "BaseBdev4", 00:19:16.592 "uuid": "9407fffc-9724-48bc-99f1-44fce4464e6c", 00:19:16.592 "is_configured": true, 00:19:16.592 "data_offset": 0, 00:19:16.592 "data_size": 65536 00:19:16.592 } 00:19:16.592 ] 00:19:16.592 } 00:19:16.592 } 00:19:16.592 }' 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:16.592 BaseBdev2 00:19:16.592 BaseBdev3 00:19:16.592 BaseBdev4' 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.592 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.593 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.593 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:16.593 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:16.593 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:16.593 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.593 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:16.593 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.593 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.850 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.850 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:16.850 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:16.850 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:16.850 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.851 [2024-11-20 05:29:48.516477] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:16.851 [2024-11-20 05:29:48.516508] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:16.851 [2024-11-20 
05:29:48.516582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.851 [2024-11-20 05:29:48.516648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:16.851 [2024-11-20 05:29:48.516657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69530 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 69530 ']' 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69530 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69530 00:19:16.851 killing process with pid 69530 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69530' 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69530 00:19:16.851 [2024-11-20 05:29:48.545605] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:16.851 05:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69530 00:19:17.176 [2024-11-20 05:29:48.748549] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:17.755 00:19:17.755 real 0m8.276s 00:19:17.755 user 0m13.177s 00:19:17.755 sys 0m1.420s 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.755 ************************************ 00:19:17.755 END TEST raid_state_function_test 00:19:17.755 ************************************ 00:19:17.755 05:29:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:19:17.755 05:29:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:17.755 05:29:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:17.755 05:29:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:17.755 ************************************ 00:19:17.755 START TEST raid_state_function_test_sb 00:19:17.755 ************************************ 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:17.755 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:17.756 Process raid pid: 70164 00:19:17.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70164 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70164' 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70164 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 70164 ']' 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:17.756 05:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.756 [2024-11-20 05:29:49.461487] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:19:17.756 [2024-11-20 05:29:49.461619] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.016 [2024-11-20 05:29:49.616098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.016 [2024-11-20 05:29:49.735151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.275 [2024-11-20 05:29:49.886342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:18.275 [2024-11-20 05:29:49.886396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:18.536 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.537 [2024-11-20 05:29:50.322399] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:18.537 [2024-11-20 05:29:50.322463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:18.537 [2024-11-20 
05:29:50.322474] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:18.537 [2024-11-20 05:29:50.322484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:18.537 [2024-11-20 05:29:50.322490] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:18.537 [2024-11-20 05:29:50.322499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:18.537 [2024-11-20 05:29:50.322505] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:18.537 [2024-11-20 05:29:50.322514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.537 05:29:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.537 "name": "Existed_Raid", 00:19:18.537 "uuid": "42387612-3872-4c24-9eb2-06c9cc5b47c3", 00:19:18.537 "strip_size_kb": 64, 00:19:18.537 "state": "configuring", 00:19:18.537 "raid_level": "concat", 00:19:18.537 "superblock": true, 00:19:18.537 "num_base_bdevs": 4, 00:19:18.537 "num_base_bdevs_discovered": 0, 00:19:18.537 "num_base_bdevs_operational": 4, 00:19:18.537 "base_bdevs_list": [ 00:19:18.537 { 00:19:18.537 "name": "BaseBdev1", 00:19:18.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.537 "is_configured": false, 00:19:18.537 "data_offset": 0, 00:19:18.537 "data_size": 0 00:19:18.537 }, 00:19:18.537 { 00:19:18.537 "name": "BaseBdev2", 00:19:18.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.537 "is_configured": false, 00:19:18.537 "data_offset": 0, 00:19:18.537 "data_size": 0 00:19:18.537 }, 00:19:18.537 { 00:19:18.537 "name": "BaseBdev3", 00:19:18.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.537 "is_configured": false, 00:19:18.537 "data_offset": 0, 00:19:18.537 "data_size": 0 00:19:18.537 }, 00:19:18.537 { 00:19:18.537 "name": "BaseBdev4", 00:19:18.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.537 "is_configured": false, 00:19:18.537 
"data_offset": 0, 00:19:18.537 "data_size": 0 00:19:18.537 } 00:19:18.537 ] 00:19:18.537 }' 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.537 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.109 [2024-11-20 05:29:50.638416] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:19.109 [2024-11-20 05:29:50.638462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.109 [2024-11-20 05:29:50.646421] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:19.109 [2024-11-20 05:29:50.646463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:19.109 [2024-11-20 05:29:50.646473] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:19.109 [2024-11-20 05:29:50.646482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:19.109 
[2024-11-20 05:29:50.646489] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:19.109 [2024-11-20 05:29:50.646498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:19.109 [2024-11-20 05:29:50.646505] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:19.109 [2024-11-20 05:29:50.646513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.109 [2024-11-20 05:29:50.681275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:19.109 BaseBdev1 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:19.109 05:29:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.110 [ 00:19:19.110 { 00:19:19.110 "name": "BaseBdev1", 00:19:19.110 "aliases": [ 00:19:19.110 "680f733d-7715-4cb2-8123-333d596ee17c" 00:19:19.110 ], 00:19:19.110 "product_name": "Malloc disk", 00:19:19.110 "block_size": 512, 00:19:19.110 "num_blocks": 65536, 00:19:19.110 "uuid": "680f733d-7715-4cb2-8123-333d596ee17c", 00:19:19.110 "assigned_rate_limits": { 00:19:19.110 "rw_ios_per_sec": 0, 00:19:19.110 "rw_mbytes_per_sec": 0, 00:19:19.110 "r_mbytes_per_sec": 0, 00:19:19.110 "w_mbytes_per_sec": 0 00:19:19.110 }, 00:19:19.110 "claimed": true, 00:19:19.110 "claim_type": "exclusive_write", 00:19:19.110 "zoned": false, 00:19:19.110 "supported_io_types": { 00:19:19.110 "read": true, 00:19:19.110 "write": true, 00:19:19.110 "unmap": true, 00:19:19.110 "flush": true, 00:19:19.110 "reset": true, 00:19:19.110 "nvme_admin": false, 00:19:19.110 "nvme_io": false, 00:19:19.110 "nvme_io_md": false, 00:19:19.110 "write_zeroes": true, 00:19:19.110 "zcopy": true, 00:19:19.110 "get_zone_info": false, 00:19:19.110 "zone_management": false, 00:19:19.110 "zone_append": false, 00:19:19.110 "compare": false, 00:19:19.110 "compare_and_write": false, 00:19:19.110 "abort": true, 00:19:19.110 "seek_hole": false, 00:19:19.110 "seek_data": false, 
00:19:19.110 "copy": true, 00:19:19.110 "nvme_iov_md": false 00:19:19.110 }, 00:19:19.110 "memory_domains": [ 00:19:19.110 { 00:19:19.110 "dma_device_id": "system", 00:19:19.110 "dma_device_type": 1 00:19:19.110 }, 00:19:19.110 { 00:19:19.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.110 "dma_device_type": 2 00:19:19.110 } 00:19:19.110 ], 00:19:19.110 "driver_specific": {} 00:19:19.110 } 00:19:19.110 ] 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.110 05:29:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.110 "name": "Existed_Raid", 00:19:19.110 "uuid": "c44ef356-4882-418c-84e8-f98bcdac51a9", 00:19:19.110 "strip_size_kb": 64, 00:19:19.110 "state": "configuring", 00:19:19.110 "raid_level": "concat", 00:19:19.110 "superblock": true, 00:19:19.110 "num_base_bdevs": 4, 00:19:19.110 "num_base_bdevs_discovered": 1, 00:19:19.110 "num_base_bdevs_operational": 4, 00:19:19.110 "base_bdevs_list": [ 00:19:19.110 { 00:19:19.110 "name": "BaseBdev1", 00:19:19.110 "uuid": "680f733d-7715-4cb2-8123-333d596ee17c", 00:19:19.110 "is_configured": true, 00:19:19.110 "data_offset": 2048, 00:19:19.110 "data_size": 63488 00:19:19.110 }, 00:19:19.110 { 00:19:19.110 "name": "BaseBdev2", 00:19:19.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.110 "is_configured": false, 00:19:19.110 "data_offset": 0, 00:19:19.110 "data_size": 0 00:19:19.110 }, 00:19:19.110 { 00:19:19.110 "name": "BaseBdev3", 00:19:19.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.110 "is_configured": false, 00:19:19.110 "data_offset": 0, 00:19:19.110 "data_size": 0 00:19:19.110 }, 00:19:19.110 { 00:19:19.110 "name": "BaseBdev4", 00:19:19.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.110 "is_configured": false, 00:19:19.110 "data_offset": 0, 00:19:19.110 "data_size": 0 00:19:19.110 } 00:19:19.110 ] 00:19:19.110 }' 00:19:19.110 05:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.110 05:29:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.371 [2024-11-20 05:29:51.045421] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:19.371 [2024-11-20 05:29:51.045480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.371 [2024-11-20 05:29:51.053485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:19.371 [2024-11-20 05:29:51.055471] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:19.371 [2024-11-20 05:29:51.055517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:19.371 [2024-11-20 05:29:51.055528] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:19.371 [2024-11-20 05:29:51.055539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:19.371 [2024-11-20 05:29:51.055545] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev4 00:19:19.371 [2024-11-20 05:29:51.055554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.371 "name": "Existed_Raid", 00:19:19.371 "uuid": "85f9aec5-4cc3-436f-a623-083500d7bf70", 00:19:19.371 "strip_size_kb": 64, 00:19:19.371 "state": "configuring", 00:19:19.371 "raid_level": "concat", 00:19:19.371 "superblock": true, 00:19:19.371 "num_base_bdevs": 4, 00:19:19.371 "num_base_bdevs_discovered": 1, 00:19:19.371 "num_base_bdevs_operational": 4, 00:19:19.371 "base_bdevs_list": [ 00:19:19.371 { 00:19:19.371 "name": "BaseBdev1", 00:19:19.371 "uuid": "680f733d-7715-4cb2-8123-333d596ee17c", 00:19:19.371 "is_configured": true, 00:19:19.371 "data_offset": 2048, 00:19:19.371 "data_size": 63488 00:19:19.371 }, 00:19:19.371 { 00:19:19.371 "name": "BaseBdev2", 00:19:19.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.371 "is_configured": false, 00:19:19.371 "data_offset": 0, 00:19:19.371 "data_size": 0 00:19:19.371 }, 00:19:19.371 { 00:19:19.371 "name": "BaseBdev3", 00:19:19.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.371 "is_configured": false, 00:19:19.371 "data_offset": 0, 00:19:19.371 "data_size": 0 00:19:19.371 }, 00:19:19.371 { 00:19:19.371 "name": "BaseBdev4", 00:19:19.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.371 "is_configured": false, 00:19:19.371 "data_offset": 0, 00:19:19.371 "data_size": 0 00:19:19.371 } 00:19:19.371 ] 00:19:19.371 }' 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.371 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.632 [2024-11-20 05:29:51.402231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:19.632 BaseBdev2 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.632 05:29:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:19.632 [ 00:19:19.632 { 00:19:19.632 "name": "BaseBdev2", 00:19:19.632 "aliases": [ 00:19:19.632 "6ad0a453-657e-4a4c-b1ad-6cdf4dc4a176" 00:19:19.632 ], 00:19:19.632 "product_name": "Malloc disk", 00:19:19.632 "block_size": 512, 00:19:19.632 "num_blocks": 65536, 00:19:19.633 "uuid": "6ad0a453-657e-4a4c-b1ad-6cdf4dc4a176", 00:19:19.633 "assigned_rate_limits": { 00:19:19.633 "rw_ios_per_sec": 0, 00:19:19.633 "rw_mbytes_per_sec": 0, 00:19:19.633 "r_mbytes_per_sec": 0, 00:19:19.633 "w_mbytes_per_sec": 0 00:19:19.633 }, 00:19:19.633 "claimed": true, 00:19:19.633 "claim_type": "exclusive_write", 00:19:19.633 "zoned": false, 00:19:19.633 "supported_io_types": { 00:19:19.633 "read": true, 00:19:19.633 "write": true, 00:19:19.633 "unmap": true, 00:19:19.633 "flush": true, 00:19:19.633 "reset": true, 00:19:19.633 "nvme_admin": false, 00:19:19.633 "nvme_io": false, 00:19:19.633 "nvme_io_md": false, 00:19:19.633 "write_zeroes": true, 00:19:19.633 "zcopy": true, 00:19:19.633 "get_zone_info": false, 00:19:19.633 "zone_management": false, 00:19:19.633 "zone_append": false, 00:19:19.633 "compare": false, 00:19:19.633 "compare_and_write": false, 00:19:19.633 "abort": true, 00:19:19.633 "seek_hole": false, 00:19:19.633 "seek_data": false, 00:19:19.633 "copy": true, 00:19:19.633 "nvme_iov_md": false 00:19:19.633 }, 00:19:19.633 "memory_domains": [ 00:19:19.633 { 00:19:19.633 "dma_device_id": "system", 00:19:19.633 "dma_device_type": 1 00:19:19.633 }, 00:19:19.633 { 00:19:19.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.633 "dma_device_type": 2 00:19:19.633 } 00:19:19.633 ], 00:19:19.633 "driver_specific": {} 00:19:19.633 } 00:19:19.633 ] 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.633 "name": "Existed_Raid", 00:19:19.633 "uuid": "85f9aec5-4cc3-436f-a623-083500d7bf70", 00:19:19.633 "strip_size_kb": 64, 00:19:19.633 "state": "configuring", 00:19:19.633 "raid_level": "concat", 00:19:19.633 "superblock": true, 00:19:19.633 "num_base_bdevs": 4, 00:19:19.633 "num_base_bdevs_discovered": 2, 00:19:19.633 "num_base_bdevs_operational": 4, 00:19:19.633 "base_bdevs_list": [ 00:19:19.633 { 00:19:19.633 "name": "BaseBdev1", 00:19:19.633 "uuid": "680f733d-7715-4cb2-8123-333d596ee17c", 00:19:19.633 "is_configured": true, 00:19:19.633 "data_offset": 2048, 00:19:19.633 "data_size": 63488 00:19:19.633 }, 00:19:19.633 { 00:19:19.633 "name": "BaseBdev2", 00:19:19.633 "uuid": "6ad0a453-657e-4a4c-b1ad-6cdf4dc4a176", 00:19:19.633 "is_configured": true, 00:19:19.633 "data_offset": 2048, 00:19:19.633 "data_size": 63488 00:19:19.633 }, 00:19:19.633 { 00:19:19.633 "name": "BaseBdev3", 00:19:19.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.633 "is_configured": false, 00:19:19.633 "data_offset": 0, 00:19:19.633 "data_size": 0 00:19:19.633 }, 00:19:19.633 { 00:19:19.633 "name": "BaseBdev4", 00:19:19.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.633 "is_configured": false, 00:19:19.633 "data_offset": 0, 00:19:19.633 "data_size": 0 00:19:19.633 } 00:19:19.633 ] 00:19:19.633 }' 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.633 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.204 [2024-11-20 05:29:51.828719] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:20.204 BaseBdev3 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.204 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.204 [ 00:19:20.204 { 00:19:20.204 "name": "BaseBdev3", 00:19:20.204 "aliases": [ 00:19:20.204 "5d0d065d-63d8-4e79-87ae-3b8294c3565d" 00:19:20.205 ], 00:19:20.205 "product_name": "Malloc disk", 00:19:20.205 "block_size": 512, 00:19:20.205 "num_blocks": 65536, 00:19:20.205 
"uuid": "5d0d065d-63d8-4e79-87ae-3b8294c3565d", 00:19:20.205 "assigned_rate_limits": { 00:19:20.205 "rw_ios_per_sec": 0, 00:19:20.205 "rw_mbytes_per_sec": 0, 00:19:20.205 "r_mbytes_per_sec": 0, 00:19:20.205 "w_mbytes_per_sec": 0 00:19:20.205 }, 00:19:20.205 "claimed": true, 00:19:20.205 "claim_type": "exclusive_write", 00:19:20.205 "zoned": false, 00:19:20.205 "supported_io_types": { 00:19:20.205 "read": true, 00:19:20.205 "write": true, 00:19:20.205 "unmap": true, 00:19:20.205 "flush": true, 00:19:20.205 "reset": true, 00:19:20.205 "nvme_admin": false, 00:19:20.205 "nvme_io": false, 00:19:20.205 "nvme_io_md": false, 00:19:20.205 "write_zeroes": true, 00:19:20.205 "zcopy": true, 00:19:20.205 "get_zone_info": false, 00:19:20.205 "zone_management": false, 00:19:20.205 "zone_append": false, 00:19:20.205 "compare": false, 00:19:20.205 "compare_and_write": false, 00:19:20.205 "abort": true, 00:19:20.205 "seek_hole": false, 00:19:20.205 "seek_data": false, 00:19:20.205 "copy": true, 00:19:20.205 "nvme_iov_md": false 00:19:20.205 }, 00:19:20.205 "memory_domains": [ 00:19:20.205 { 00:19:20.205 "dma_device_id": "system", 00:19:20.205 "dma_device_type": 1 00:19:20.205 }, 00:19:20.205 { 00:19:20.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.205 "dma_device_type": 2 00:19:20.205 } 00:19:20.205 ], 00:19:20.205 "driver_specific": {} 00:19:20.205 } 00:19:20.205 ] 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:20.205 05:29:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.205 "name": "Existed_Raid", 00:19:20.205 "uuid": "85f9aec5-4cc3-436f-a623-083500d7bf70", 00:19:20.205 "strip_size_kb": 64, 00:19:20.205 "state": "configuring", 00:19:20.205 "raid_level": "concat", 00:19:20.205 "superblock": true, 00:19:20.205 "num_base_bdevs": 4, 00:19:20.205 
"num_base_bdevs_discovered": 3, 00:19:20.205 "num_base_bdevs_operational": 4, 00:19:20.205 "base_bdevs_list": [ 00:19:20.205 { 00:19:20.205 "name": "BaseBdev1", 00:19:20.205 "uuid": "680f733d-7715-4cb2-8123-333d596ee17c", 00:19:20.205 "is_configured": true, 00:19:20.205 "data_offset": 2048, 00:19:20.205 "data_size": 63488 00:19:20.205 }, 00:19:20.205 { 00:19:20.205 "name": "BaseBdev2", 00:19:20.205 "uuid": "6ad0a453-657e-4a4c-b1ad-6cdf4dc4a176", 00:19:20.205 "is_configured": true, 00:19:20.205 "data_offset": 2048, 00:19:20.205 "data_size": 63488 00:19:20.205 }, 00:19:20.205 { 00:19:20.205 "name": "BaseBdev3", 00:19:20.205 "uuid": "5d0d065d-63d8-4e79-87ae-3b8294c3565d", 00:19:20.205 "is_configured": true, 00:19:20.205 "data_offset": 2048, 00:19:20.205 "data_size": 63488 00:19:20.205 }, 00:19:20.205 { 00:19:20.205 "name": "BaseBdev4", 00:19:20.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.205 "is_configured": false, 00:19:20.205 "data_offset": 0, 00:19:20.205 "data_size": 0 00:19:20.205 } 00:19:20.205 ] 00:19:20.205 }' 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.205 05:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.467 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:20.467 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.467 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.467 [2024-11-20 05:29:52.218069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:20.467 [2024-11-20 05:29:52.218323] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:20.467 [2024-11-20 05:29:52.218338] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 
00:19:20.467 BaseBdev4 00:19:20.467 [2024-11-20 05:29:52.218638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:20.467 [2024-11-20 05:29:52.218797] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:20.467 [2024-11-20 05:29:52.218814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:20.467 [2024-11-20 05:29:52.218943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.467 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.467 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:20.467 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:19:20.467 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:20.467 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:20.467 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:20.467 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:20.467 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:20.467 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.467 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.467 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.467 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:20.467 05:29:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.467 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.467 [ 00:19:20.467 { 00:19:20.467 "name": "BaseBdev4", 00:19:20.467 "aliases": [ 00:19:20.467 "59941e6a-d409-4ef6-b7d3-aa1f18138d98" 00:19:20.467 ], 00:19:20.467 "product_name": "Malloc disk", 00:19:20.467 "block_size": 512, 00:19:20.467 "num_blocks": 65536, 00:19:20.467 "uuid": "59941e6a-d409-4ef6-b7d3-aa1f18138d98", 00:19:20.467 "assigned_rate_limits": { 00:19:20.467 "rw_ios_per_sec": 0, 00:19:20.467 "rw_mbytes_per_sec": 0, 00:19:20.467 "r_mbytes_per_sec": 0, 00:19:20.467 "w_mbytes_per_sec": 0 00:19:20.467 }, 00:19:20.467 "claimed": true, 00:19:20.467 "claim_type": "exclusive_write", 00:19:20.467 "zoned": false, 00:19:20.467 "supported_io_types": { 00:19:20.467 "read": true, 00:19:20.467 "write": true, 00:19:20.467 "unmap": true, 00:19:20.467 "flush": true, 00:19:20.467 "reset": true, 00:19:20.467 "nvme_admin": false, 00:19:20.467 "nvme_io": false, 00:19:20.467 "nvme_io_md": false, 00:19:20.467 "write_zeroes": true, 00:19:20.467 "zcopy": true, 00:19:20.467 "get_zone_info": false, 00:19:20.467 "zone_management": false, 00:19:20.467 "zone_append": false, 00:19:20.467 "compare": false, 00:19:20.467 "compare_and_write": false, 00:19:20.468 "abort": true, 00:19:20.468 "seek_hole": false, 00:19:20.468 "seek_data": false, 00:19:20.468 "copy": true, 00:19:20.468 "nvme_iov_md": false 00:19:20.468 }, 00:19:20.468 "memory_domains": [ 00:19:20.468 { 00:19:20.468 "dma_device_id": "system", 00:19:20.468 "dma_device_type": 1 00:19:20.468 }, 00:19:20.468 { 00:19:20.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.468 "dma_device_type": 2 00:19:20.468 } 00:19:20.468 ], 00:19:20.468 "driver_specific": {} 00:19:20.468 } 00:19:20.468 ] 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.468 05:29:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.468 05:29:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.468 "name": "Existed_Raid", 00:19:20.468 "uuid": "85f9aec5-4cc3-436f-a623-083500d7bf70", 00:19:20.468 "strip_size_kb": 64, 00:19:20.468 "state": "online", 00:19:20.468 "raid_level": "concat", 00:19:20.468 "superblock": true, 00:19:20.468 "num_base_bdevs": 4, 00:19:20.468 "num_base_bdevs_discovered": 4, 00:19:20.468 "num_base_bdevs_operational": 4, 00:19:20.468 "base_bdevs_list": [ 00:19:20.468 { 00:19:20.468 "name": "BaseBdev1", 00:19:20.468 "uuid": "680f733d-7715-4cb2-8123-333d596ee17c", 00:19:20.468 "is_configured": true, 00:19:20.468 "data_offset": 2048, 00:19:20.468 "data_size": 63488 00:19:20.468 }, 00:19:20.468 { 00:19:20.468 "name": "BaseBdev2", 00:19:20.468 "uuid": "6ad0a453-657e-4a4c-b1ad-6cdf4dc4a176", 00:19:20.468 "is_configured": true, 00:19:20.468 "data_offset": 2048, 00:19:20.468 "data_size": 63488 00:19:20.468 }, 00:19:20.468 { 00:19:20.468 "name": "BaseBdev3", 00:19:20.468 "uuid": "5d0d065d-63d8-4e79-87ae-3b8294c3565d", 00:19:20.468 "is_configured": true, 00:19:20.468 "data_offset": 2048, 00:19:20.468 "data_size": 63488 00:19:20.468 }, 00:19:20.468 { 00:19:20.468 "name": "BaseBdev4", 00:19:20.468 "uuid": "59941e6a-d409-4ef6-b7d3-aa1f18138d98", 00:19:20.468 "is_configured": true, 00:19:20.468 "data_offset": 2048, 00:19:20.468 "data_size": 63488 00:19:20.468 } 00:19:20.468 ] 00:19:20.468 }' 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.468 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.039 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:21.039 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 
00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.040 [2024-11-20 05:29:52.574624] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:21.040 "name": "Existed_Raid", 00:19:21.040 "aliases": [ 00:19:21.040 "85f9aec5-4cc3-436f-a623-083500d7bf70" 00:19:21.040 ], 00:19:21.040 "product_name": "Raid Volume", 00:19:21.040 "block_size": 512, 00:19:21.040 "num_blocks": 253952, 00:19:21.040 "uuid": "85f9aec5-4cc3-436f-a623-083500d7bf70", 00:19:21.040 "assigned_rate_limits": { 00:19:21.040 "rw_ios_per_sec": 0, 00:19:21.040 "rw_mbytes_per_sec": 0, 00:19:21.040 "r_mbytes_per_sec": 0, 00:19:21.040 "w_mbytes_per_sec": 0 00:19:21.040 }, 00:19:21.040 "claimed": false, 00:19:21.040 "zoned": false, 00:19:21.040 "supported_io_types": { 00:19:21.040 "read": true, 00:19:21.040 "write": true, 00:19:21.040 "unmap": true, 00:19:21.040 "flush": true, 00:19:21.040 "reset": true, 00:19:21.040 "nvme_admin": 
false, 00:19:21.040 "nvme_io": false, 00:19:21.040 "nvme_io_md": false, 00:19:21.040 "write_zeroes": true, 00:19:21.040 "zcopy": false, 00:19:21.040 "get_zone_info": false, 00:19:21.040 "zone_management": false, 00:19:21.040 "zone_append": false, 00:19:21.040 "compare": false, 00:19:21.040 "compare_and_write": false, 00:19:21.040 "abort": false, 00:19:21.040 "seek_hole": false, 00:19:21.040 "seek_data": false, 00:19:21.040 "copy": false, 00:19:21.040 "nvme_iov_md": false 00:19:21.040 }, 00:19:21.040 "memory_domains": [ 00:19:21.040 { 00:19:21.040 "dma_device_id": "system", 00:19:21.040 "dma_device_type": 1 00:19:21.040 }, 00:19:21.040 { 00:19:21.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.040 "dma_device_type": 2 00:19:21.040 }, 00:19:21.040 { 00:19:21.040 "dma_device_id": "system", 00:19:21.040 "dma_device_type": 1 00:19:21.040 }, 00:19:21.040 { 00:19:21.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.040 "dma_device_type": 2 00:19:21.040 }, 00:19:21.040 { 00:19:21.040 "dma_device_id": "system", 00:19:21.040 "dma_device_type": 1 00:19:21.040 }, 00:19:21.040 { 00:19:21.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.040 "dma_device_type": 2 00:19:21.040 }, 00:19:21.040 { 00:19:21.040 "dma_device_id": "system", 00:19:21.040 "dma_device_type": 1 00:19:21.040 }, 00:19:21.040 { 00:19:21.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.040 "dma_device_type": 2 00:19:21.040 } 00:19:21.040 ], 00:19:21.040 "driver_specific": { 00:19:21.040 "raid": { 00:19:21.040 "uuid": "85f9aec5-4cc3-436f-a623-083500d7bf70", 00:19:21.040 "strip_size_kb": 64, 00:19:21.040 "state": "online", 00:19:21.040 "raid_level": "concat", 00:19:21.040 "superblock": true, 00:19:21.040 "num_base_bdevs": 4, 00:19:21.040 "num_base_bdevs_discovered": 4, 00:19:21.040 "num_base_bdevs_operational": 4, 00:19:21.040 "base_bdevs_list": [ 00:19:21.040 { 00:19:21.040 "name": "BaseBdev1", 00:19:21.040 "uuid": "680f733d-7715-4cb2-8123-333d596ee17c", 00:19:21.040 "is_configured": true, 
00:19:21.040 "data_offset": 2048, 00:19:21.040 "data_size": 63488 00:19:21.040 }, 00:19:21.040 { 00:19:21.040 "name": "BaseBdev2", 00:19:21.040 "uuid": "6ad0a453-657e-4a4c-b1ad-6cdf4dc4a176", 00:19:21.040 "is_configured": true, 00:19:21.040 "data_offset": 2048, 00:19:21.040 "data_size": 63488 00:19:21.040 }, 00:19:21.040 { 00:19:21.040 "name": "BaseBdev3", 00:19:21.040 "uuid": "5d0d065d-63d8-4e79-87ae-3b8294c3565d", 00:19:21.040 "is_configured": true, 00:19:21.040 "data_offset": 2048, 00:19:21.040 "data_size": 63488 00:19:21.040 }, 00:19:21.040 { 00:19:21.040 "name": "BaseBdev4", 00:19:21.040 "uuid": "59941e6a-d409-4ef6-b7d3-aa1f18138d98", 00:19:21.040 "is_configured": true, 00:19:21.040 "data_offset": 2048, 00:19:21.040 "data_size": 63488 00:19:21.040 } 00:19:21.040 ] 00:19:21.040 } 00:19:21.040 } 00:19:21.040 }' 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:21.040 BaseBdev2 00:19:21.040 BaseBdev3 00:19:21.040 BaseBdev4' 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.040 
05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.040 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.041 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.041 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:21.041 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.041 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.041 [2024-11-20 05:29:52.814311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:21.041 [2024-11-20 05:29:52.814451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.041 [2024-11-20 05:29:52.814520] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.300 "name": "Existed_Raid", 00:19:21.300 "uuid": "85f9aec5-4cc3-436f-a623-083500d7bf70", 00:19:21.300 "strip_size_kb": 64, 00:19:21.300 "state": "offline", 00:19:21.300 "raid_level": "concat", 00:19:21.300 "superblock": true, 00:19:21.300 "num_base_bdevs": 4, 00:19:21.300 "num_base_bdevs_discovered": 3, 00:19:21.300 "num_base_bdevs_operational": 3, 00:19:21.300 "base_bdevs_list": [ 00:19:21.300 { 00:19:21.300 "name": null, 00:19:21.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.300 "is_configured": false, 00:19:21.300 "data_offset": 0, 00:19:21.300 "data_size": 63488 00:19:21.300 }, 00:19:21.300 { 00:19:21.300 "name": "BaseBdev2", 00:19:21.300 "uuid": "6ad0a453-657e-4a4c-b1ad-6cdf4dc4a176", 00:19:21.300 "is_configured": true, 00:19:21.300 "data_offset": 2048, 00:19:21.300 "data_size": 63488 00:19:21.300 }, 00:19:21.300 { 00:19:21.300 "name": "BaseBdev3", 00:19:21.300 "uuid": "5d0d065d-63d8-4e79-87ae-3b8294c3565d", 00:19:21.300 "is_configured": true, 00:19:21.300 "data_offset": 2048, 00:19:21.300 "data_size": 63488 00:19:21.300 }, 00:19:21.300 { 00:19:21.300 "name": "BaseBdev4", 00:19:21.300 "uuid": "59941e6a-d409-4ef6-b7d3-aa1f18138d98", 00:19:21.300 "is_configured": true, 00:19:21.300 "data_offset": 2048, 00:19:21.300 "data_size": 63488 00:19:21.300 } 00:19:21.300 ] 00:19:21.300 }' 00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:19:21.300 05:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.561 [2024-11-20 05:29:53.260552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:21.561 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:21.562 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.562 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.562 [2024-11-20 05:29:53.366443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.821 [2024-11-20 05:29:53.468574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:21.821 [2024-11-20 05:29:53.468762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 
00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.821 BaseBdev2 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.821 05:29:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.821 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.821 [ 00:19:21.821 { 00:19:21.821 "name": "BaseBdev2", 00:19:21.821 "aliases": [ 00:19:21.821 "df51d1ae-4712-4859-ac15-adcfcb5f7800" 00:19:21.821 ], 00:19:21.821 "product_name": "Malloc disk", 00:19:21.821 "block_size": 512, 00:19:21.821 "num_blocks": 65536, 00:19:21.821 "uuid": "df51d1ae-4712-4859-ac15-adcfcb5f7800", 00:19:21.821 "assigned_rate_limits": { 00:19:21.821 "rw_ios_per_sec": 0, 00:19:21.821 "rw_mbytes_per_sec": 0, 00:19:21.822 "r_mbytes_per_sec": 0, 00:19:21.822 "w_mbytes_per_sec": 0 00:19:21.822 }, 00:19:21.822 "claimed": false, 00:19:21.822 "zoned": false, 00:19:21.822 "supported_io_types": { 00:19:21.822 "read": true, 00:19:21.822 "write": true, 00:19:21.822 "unmap": true, 00:19:21.822 "flush": true, 00:19:21.822 "reset": true, 00:19:21.822 "nvme_admin": false, 00:19:21.822 "nvme_io": false, 00:19:21.822 "nvme_io_md": false, 00:19:21.822 "write_zeroes": true, 00:19:21.822 "zcopy": true, 00:19:21.822 "get_zone_info": false, 00:19:21.822 "zone_management": false, 00:19:21.822 "zone_append": false, 00:19:21.822 "compare": false, 00:19:21.822 "compare_and_write": false, 00:19:21.822 "abort": true, 00:19:21.822 "seek_hole": false, 00:19:21.822 "seek_data": false, 00:19:21.822 "copy": true, 00:19:21.822 "nvme_iov_md": false 00:19:21.822 }, 00:19:21.822 "memory_domains": [ 00:19:21.822 { 00:19:21.822 "dma_device_id": "system", 00:19:21.822 "dma_device_type": 1 00:19:21.822 }, 00:19:21.822 { 00:19:21.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.822 "dma_device_type": 2 00:19:21.822 } 00:19:21.822 ], 00:19:21.822 "driver_specific": {} 00:19:21.822 } 00:19:21.822 ] 00:19:21.822 05:29:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.822 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:21.822 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:21.822 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:21.822 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:21.822 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.822 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.080 BaseBdev3 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.081 [ 00:19:22.081 { 00:19:22.081 "name": "BaseBdev3", 00:19:22.081 "aliases": [ 00:19:22.081 "7c6864e7-7f03-4149-8862-e572d75e2bd3" 00:19:22.081 ], 00:19:22.081 "product_name": "Malloc disk", 00:19:22.081 "block_size": 512, 00:19:22.081 "num_blocks": 65536, 00:19:22.081 "uuid": "7c6864e7-7f03-4149-8862-e572d75e2bd3", 00:19:22.081 "assigned_rate_limits": { 00:19:22.081 "rw_ios_per_sec": 0, 00:19:22.081 "rw_mbytes_per_sec": 0, 00:19:22.081 "r_mbytes_per_sec": 0, 00:19:22.081 "w_mbytes_per_sec": 0 00:19:22.081 }, 00:19:22.081 "claimed": false, 00:19:22.081 "zoned": false, 00:19:22.081 "supported_io_types": { 00:19:22.081 "read": true, 00:19:22.081 "write": true, 00:19:22.081 "unmap": true, 00:19:22.081 "flush": true, 00:19:22.081 "reset": true, 00:19:22.081 "nvme_admin": false, 00:19:22.081 "nvme_io": false, 00:19:22.081 "nvme_io_md": false, 00:19:22.081 "write_zeroes": true, 00:19:22.081 "zcopy": true, 00:19:22.081 "get_zone_info": false, 00:19:22.081 "zone_management": false, 00:19:22.081 "zone_append": false, 00:19:22.081 "compare": false, 00:19:22.081 "compare_and_write": false, 00:19:22.081 "abort": true, 00:19:22.081 "seek_hole": false, 00:19:22.081 "seek_data": false, 00:19:22.081 "copy": true, 00:19:22.081 "nvme_iov_md": false 00:19:22.081 }, 00:19:22.081 "memory_domains": [ 00:19:22.081 { 00:19:22.081 "dma_device_id": "system", 00:19:22.081 "dma_device_type": 1 00:19:22.081 }, 00:19:22.081 { 00:19:22.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.081 "dma_device_type": 2 00:19:22.081 } 00:19:22.081 ], 00:19:22.081 "driver_specific": {} 
00:19:22.081 } 00:19:22.081 ] 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.081 BaseBdev4 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.081 
05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.081 [ 00:19:22.081 { 00:19:22.081 "name": "BaseBdev4", 00:19:22.081 "aliases": [ 00:19:22.081 "682afd46-3793-455f-b098-060e3e79a882" 00:19:22.081 ], 00:19:22.081 "product_name": "Malloc disk", 00:19:22.081 "block_size": 512, 00:19:22.081 "num_blocks": 65536, 00:19:22.081 "uuid": "682afd46-3793-455f-b098-060e3e79a882", 00:19:22.081 "assigned_rate_limits": { 00:19:22.081 "rw_ios_per_sec": 0, 00:19:22.081 "rw_mbytes_per_sec": 0, 00:19:22.081 "r_mbytes_per_sec": 0, 00:19:22.081 "w_mbytes_per_sec": 0 00:19:22.081 }, 00:19:22.081 "claimed": false, 00:19:22.081 "zoned": false, 00:19:22.081 "supported_io_types": { 00:19:22.081 "read": true, 00:19:22.081 "write": true, 00:19:22.081 "unmap": true, 00:19:22.081 "flush": true, 00:19:22.081 "reset": true, 00:19:22.081 "nvme_admin": false, 00:19:22.081 "nvme_io": false, 00:19:22.081 "nvme_io_md": false, 00:19:22.081 "write_zeroes": true, 00:19:22.081 "zcopy": true, 00:19:22.081 "get_zone_info": false, 00:19:22.081 "zone_management": false, 00:19:22.081 "zone_append": false, 00:19:22.081 "compare": false, 00:19:22.081 "compare_and_write": false, 00:19:22.081 "abort": true, 00:19:22.081 "seek_hole": false, 00:19:22.081 "seek_data": false, 00:19:22.081 "copy": true, 00:19:22.081 "nvme_iov_md": false 00:19:22.081 }, 00:19:22.081 "memory_domains": [ 00:19:22.081 { 00:19:22.081 "dma_device_id": "system", 00:19:22.081 "dma_device_type": 1 00:19:22.081 }, 00:19:22.081 { 00:19:22.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.081 "dma_device_type": 2 00:19:22.081 } 
00:19:22.081 ], 00:19:22.081 "driver_specific": {} 00:19:22.081 } 00:19:22.081 ] 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.081 [2024-11-20 05:29:53.752338] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:22.081 [2024-11-20 05:29:53.752502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:22.081 [2024-11-20 05:29:53.752588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:22.081 [2024-11-20 05:29:53.754552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:22.081 [2024-11-20 05:29:53.754672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:22.081 05:29:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.081 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.082 "name": "Existed_Raid", 00:19:22.082 "uuid": "25c9ae50-8bad-4bb6-b9b3-2616bb8ce03d", 00:19:22.082 "strip_size_kb": 64, 00:19:22.082 "state": "configuring", 00:19:22.082 "raid_level": "concat", 00:19:22.082 "superblock": true, 00:19:22.082 "num_base_bdevs": 4, 00:19:22.082 "num_base_bdevs_discovered": 3, 00:19:22.082 "num_base_bdevs_operational": 4, 00:19:22.082 "base_bdevs_list": [ 00:19:22.082 
{ 00:19:22.082 "name": "BaseBdev1", 00:19:22.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.082 "is_configured": false, 00:19:22.082 "data_offset": 0, 00:19:22.082 "data_size": 0 00:19:22.082 }, 00:19:22.082 { 00:19:22.082 "name": "BaseBdev2", 00:19:22.082 "uuid": "df51d1ae-4712-4859-ac15-adcfcb5f7800", 00:19:22.082 "is_configured": true, 00:19:22.082 "data_offset": 2048, 00:19:22.082 "data_size": 63488 00:19:22.082 }, 00:19:22.082 { 00:19:22.082 "name": "BaseBdev3", 00:19:22.082 "uuid": "7c6864e7-7f03-4149-8862-e572d75e2bd3", 00:19:22.082 "is_configured": true, 00:19:22.082 "data_offset": 2048, 00:19:22.082 "data_size": 63488 00:19:22.082 }, 00:19:22.082 { 00:19:22.082 "name": "BaseBdev4", 00:19:22.082 "uuid": "682afd46-3793-455f-b098-060e3e79a882", 00:19:22.082 "is_configured": true, 00:19:22.082 "data_offset": 2048, 00:19:22.082 "data_size": 63488 00:19:22.082 } 00:19:22.082 ] 00:19:22.082 }' 00:19:22.082 05:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.082 05:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.339 [2024-11-20 05:29:54.076416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:22.339 05:29:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.339 "name": "Existed_Raid", 00:19:22.339 "uuid": "25c9ae50-8bad-4bb6-b9b3-2616bb8ce03d", 00:19:22.339 "strip_size_kb": 64, 00:19:22.339 "state": "configuring", 00:19:22.339 "raid_level": "concat", 00:19:22.339 "superblock": true, 00:19:22.339 "num_base_bdevs": 4, 00:19:22.339 "num_base_bdevs_discovered": 2, 00:19:22.339 "num_base_bdevs_operational": 4, 00:19:22.339 "base_bdevs_list": [ 00:19:22.339 
{ 00:19:22.339 "name": "BaseBdev1", 00:19:22.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.339 "is_configured": false, 00:19:22.339 "data_offset": 0, 00:19:22.339 "data_size": 0 00:19:22.339 }, 00:19:22.339 { 00:19:22.339 "name": null, 00:19:22.339 "uuid": "df51d1ae-4712-4859-ac15-adcfcb5f7800", 00:19:22.339 "is_configured": false, 00:19:22.339 "data_offset": 0, 00:19:22.339 "data_size": 63488 00:19:22.339 }, 00:19:22.339 { 00:19:22.339 "name": "BaseBdev3", 00:19:22.339 "uuid": "7c6864e7-7f03-4149-8862-e572d75e2bd3", 00:19:22.339 "is_configured": true, 00:19:22.339 "data_offset": 2048, 00:19:22.339 "data_size": 63488 00:19:22.339 }, 00:19:22.339 { 00:19:22.339 "name": "BaseBdev4", 00:19:22.339 "uuid": "682afd46-3793-455f-b098-060e3e79a882", 00:19:22.339 "is_configured": true, 00:19:22.339 "data_offset": 2048, 00:19:22.339 "data_size": 63488 00:19:22.339 } 00:19:22.339 ] 00:19:22.339 }' 00:19:22.339 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.340 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.597 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:22.597 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.597 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.597 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.597 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.597 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:22.597 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:22.597 05:29:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.597 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.854 [2024-11-20 05:29:54.449218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:22.854 BaseBdev1 00:19:22.854 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.854 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:22.854 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:22.854 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:22.854 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:22.854 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:22.854 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:22.854 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:22.854 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.854 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.854 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.854 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:22.854 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.854 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.854 [ 00:19:22.854 { 00:19:22.854 
"name": "BaseBdev1", 00:19:22.855 "aliases": [ 00:19:22.855 "62ef1487-38d5-4bd0-97b9-447f9029bf6a" 00:19:22.855 ], 00:19:22.855 "product_name": "Malloc disk", 00:19:22.855 "block_size": 512, 00:19:22.855 "num_blocks": 65536, 00:19:22.855 "uuid": "62ef1487-38d5-4bd0-97b9-447f9029bf6a", 00:19:22.855 "assigned_rate_limits": { 00:19:22.855 "rw_ios_per_sec": 0, 00:19:22.855 "rw_mbytes_per_sec": 0, 00:19:22.855 "r_mbytes_per_sec": 0, 00:19:22.855 "w_mbytes_per_sec": 0 00:19:22.855 }, 00:19:22.855 "claimed": true, 00:19:22.855 "claim_type": "exclusive_write", 00:19:22.855 "zoned": false, 00:19:22.855 "supported_io_types": { 00:19:22.855 "read": true, 00:19:22.855 "write": true, 00:19:22.855 "unmap": true, 00:19:22.855 "flush": true, 00:19:22.855 "reset": true, 00:19:22.855 "nvme_admin": false, 00:19:22.855 "nvme_io": false, 00:19:22.855 "nvme_io_md": false, 00:19:22.855 "write_zeroes": true, 00:19:22.855 "zcopy": true, 00:19:22.855 "get_zone_info": false, 00:19:22.855 "zone_management": false, 00:19:22.855 "zone_append": false, 00:19:22.855 "compare": false, 00:19:22.855 "compare_and_write": false, 00:19:22.855 "abort": true, 00:19:22.855 "seek_hole": false, 00:19:22.855 "seek_data": false, 00:19:22.855 "copy": true, 00:19:22.855 "nvme_iov_md": false 00:19:22.855 }, 00:19:22.855 "memory_domains": [ 00:19:22.855 { 00:19:22.855 "dma_device_id": "system", 00:19:22.855 "dma_device_type": 1 00:19:22.855 }, 00:19:22.855 { 00:19:22.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.855 "dma_device_type": 2 00:19:22.855 } 00:19:22.855 ], 00:19:22.855 "driver_specific": {} 00:19:22.855 } 00:19:22.855 ] 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:22.855 
05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.855 "name": "Existed_Raid", 00:19:22.855 "uuid": "25c9ae50-8bad-4bb6-b9b3-2616bb8ce03d", 00:19:22.855 "strip_size_kb": 64, 00:19:22.855 "state": "configuring", 00:19:22.855 "raid_level": "concat", 00:19:22.855 "superblock": true, 00:19:22.855 "num_base_bdevs": 4, 
00:19:22.855 "num_base_bdevs_discovered": 3, 00:19:22.855 "num_base_bdevs_operational": 4, 00:19:22.855 "base_bdevs_list": [ 00:19:22.855 { 00:19:22.855 "name": "BaseBdev1", 00:19:22.855 "uuid": "62ef1487-38d5-4bd0-97b9-447f9029bf6a", 00:19:22.855 "is_configured": true, 00:19:22.855 "data_offset": 2048, 00:19:22.855 "data_size": 63488 00:19:22.855 }, 00:19:22.855 { 00:19:22.855 "name": null, 00:19:22.855 "uuid": "df51d1ae-4712-4859-ac15-adcfcb5f7800", 00:19:22.855 "is_configured": false, 00:19:22.855 "data_offset": 0, 00:19:22.855 "data_size": 63488 00:19:22.855 }, 00:19:22.855 { 00:19:22.855 "name": "BaseBdev3", 00:19:22.855 "uuid": "7c6864e7-7f03-4149-8862-e572d75e2bd3", 00:19:22.855 "is_configured": true, 00:19:22.855 "data_offset": 2048, 00:19:22.855 "data_size": 63488 00:19:22.855 }, 00:19:22.855 { 00:19:22.855 "name": "BaseBdev4", 00:19:22.855 "uuid": "682afd46-3793-455f-b098-060e3e79a882", 00:19:22.855 "is_configured": true, 00:19:22.855 "data_offset": 2048, 00:19:22.855 "data_size": 63488 00:19:22.855 } 00:19:22.855 ] 00:19:22.855 }' 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.855 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:23.113 05:29:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.113 [2024-11-20 05:29:54.845413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.113 05:29:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.113 "name": "Existed_Raid", 00:19:23.113 "uuid": "25c9ae50-8bad-4bb6-b9b3-2616bb8ce03d", 00:19:23.113 "strip_size_kb": 64, 00:19:23.113 "state": "configuring", 00:19:23.113 "raid_level": "concat", 00:19:23.113 "superblock": true, 00:19:23.113 "num_base_bdevs": 4, 00:19:23.113 "num_base_bdevs_discovered": 2, 00:19:23.113 "num_base_bdevs_operational": 4, 00:19:23.113 "base_bdevs_list": [ 00:19:23.113 { 00:19:23.113 "name": "BaseBdev1", 00:19:23.113 "uuid": "62ef1487-38d5-4bd0-97b9-447f9029bf6a", 00:19:23.113 "is_configured": true, 00:19:23.113 "data_offset": 2048, 00:19:23.113 "data_size": 63488 00:19:23.113 }, 00:19:23.113 { 00:19:23.113 "name": null, 00:19:23.113 "uuid": "df51d1ae-4712-4859-ac15-adcfcb5f7800", 00:19:23.113 "is_configured": false, 00:19:23.113 "data_offset": 0, 00:19:23.113 "data_size": 63488 00:19:23.113 }, 00:19:23.113 { 00:19:23.113 "name": null, 00:19:23.113 "uuid": "7c6864e7-7f03-4149-8862-e572d75e2bd3", 00:19:23.113 "is_configured": false, 00:19:23.113 "data_offset": 0, 00:19:23.113 "data_size": 63488 00:19:23.113 }, 00:19:23.113 { 00:19:23.113 "name": "BaseBdev4", 00:19:23.113 "uuid": "682afd46-3793-455f-b098-060e3e79a882", 00:19:23.113 "is_configured": true, 00:19:23.113 "data_offset": 2048, 00:19:23.113 "data_size": 63488 00:19:23.113 } 00:19:23.113 ] 00:19:23.113 }' 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.113 05:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.371 05:29:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.371 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.371 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.371 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:23.371 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.630 [2024-11-20 05:29:55.213472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.630 "name": "Existed_Raid", 00:19:23.630 "uuid": "25c9ae50-8bad-4bb6-b9b3-2616bb8ce03d", 00:19:23.630 "strip_size_kb": 64, 00:19:23.630 "state": "configuring", 00:19:23.630 "raid_level": "concat", 00:19:23.630 "superblock": true, 00:19:23.630 "num_base_bdevs": 4, 00:19:23.630 "num_base_bdevs_discovered": 3, 00:19:23.630 "num_base_bdevs_operational": 4, 00:19:23.630 "base_bdevs_list": [ 00:19:23.630 { 00:19:23.630 "name": "BaseBdev1", 00:19:23.630 "uuid": "62ef1487-38d5-4bd0-97b9-447f9029bf6a", 00:19:23.630 "is_configured": true, 00:19:23.630 "data_offset": 2048, 00:19:23.630 "data_size": 63488 00:19:23.630 }, 00:19:23.630 { 00:19:23.630 "name": null, 00:19:23.630 "uuid": "df51d1ae-4712-4859-ac15-adcfcb5f7800", 00:19:23.630 "is_configured": false, 00:19:23.630 "data_offset": 0, 00:19:23.630 "data_size": 63488 
00:19:23.630 }, 00:19:23.630 { 00:19:23.630 "name": "BaseBdev3", 00:19:23.630 "uuid": "7c6864e7-7f03-4149-8862-e572d75e2bd3", 00:19:23.630 "is_configured": true, 00:19:23.630 "data_offset": 2048, 00:19:23.630 "data_size": 63488 00:19:23.630 }, 00:19:23.630 { 00:19:23.630 "name": "BaseBdev4", 00:19:23.630 "uuid": "682afd46-3793-455f-b098-060e3e79a882", 00:19:23.630 "is_configured": true, 00:19:23.630 "data_offset": 2048, 00:19:23.630 "data_size": 63488 00:19:23.630 } 00:19:23.630 ] 00:19:23.630 }' 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.630 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.888 [2024-11-20 05:29:55.585567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.888 05:29:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.888 "name": "Existed_Raid", 00:19:23.888 "uuid": "25c9ae50-8bad-4bb6-b9b3-2616bb8ce03d", 00:19:23.888 "strip_size_kb": 64, 
00:19:23.888 "state": "configuring", 00:19:23.888 "raid_level": "concat", 00:19:23.888 "superblock": true, 00:19:23.888 "num_base_bdevs": 4, 00:19:23.888 "num_base_bdevs_discovered": 2, 00:19:23.888 "num_base_bdevs_operational": 4, 00:19:23.888 "base_bdevs_list": [ 00:19:23.888 { 00:19:23.888 "name": null, 00:19:23.888 "uuid": "62ef1487-38d5-4bd0-97b9-447f9029bf6a", 00:19:23.888 "is_configured": false, 00:19:23.888 "data_offset": 0, 00:19:23.888 "data_size": 63488 00:19:23.888 }, 00:19:23.888 { 00:19:23.888 "name": null, 00:19:23.888 "uuid": "df51d1ae-4712-4859-ac15-adcfcb5f7800", 00:19:23.888 "is_configured": false, 00:19:23.888 "data_offset": 0, 00:19:23.888 "data_size": 63488 00:19:23.888 }, 00:19:23.888 { 00:19:23.888 "name": "BaseBdev3", 00:19:23.888 "uuid": "7c6864e7-7f03-4149-8862-e572d75e2bd3", 00:19:23.888 "is_configured": true, 00:19:23.888 "data_offset": 2048, 00:19:23.888 "data_size": 63488 00:19:23.888 }, 00:19:23.888 { 00:19:23.888 "name": "BaseBdev4", 00:19:23.888 "uuid": "682afd46-3793-455f-b098-060e3e79a882", 00:19:23.888 "is_configured": true, 00:19:23.888 "data_offset": 2048, 00:19:23.888 "data_size": 63488 00:19:23.888 } 00:19:23.888 ] 00:19:23.888 }' 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.888 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.147 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:24.147 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.147 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.147 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.405 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.405 
05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:24.405 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:24.405 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.405 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.405 [2024-11-20 05:29:55.995332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:24.405 05:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.405 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:24.405 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:24.405 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:24.405 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:24.405 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.405 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:24.405 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.405 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.405 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.405 05:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.405 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:24.405 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.405 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.405 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.405 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.405 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.405 "name": "Existed_Raid", 00:19:24.405 "uuid": "25c9ae50-8bad-4bb6-b9b3-2616bb8ce03d", 00:19:24.405 "strip_size_kb": 64, 00:19:24.405 "state": "configuring", 00:19:24.405 "raid_level": "concat", 00:19:24.405 "superblock": true, 00:19:24.405 "num_base_bdevs": 4, 00:19:24.405 "num_base_bdevs_discovered": 3, 00:19:24.405 "num_base_bdevs_operational": 4, 00:19:24.405 "base_bdevs_list": [ 00:19:24.405 { 00:19:24.405 "name": null, 00:19:24.405 "uuid": "62ef1487-38d5-4bd0-97b9-447f9029bf6a", 00:19:24.405 "is_configured": false, 00:19:24.405 "data_offset": 0, 00:19:24.405 "data_size": 63488 00:19:24.405 }, 00:19:24.405 { 00:19:24.405 "name": "BaseBdev2", 00:19:24.405 "uuid": "df51d1ae-4712-4859-ac15-adcfcb5f7800", 00:19:24.405 "is_configured": true, 00:19:24.405 "data_offset": 2048, 00:19:24.405 "data_size": 63488 00:19:24.405 }, 00:19:24.405 { 00:19:24.405 "name": "BaseBdev3", 00:19:24.405 "uuid": "7c6864e7-7f03-4149-8862-e572d75e2bd3", 00:19:24.405 "is_configured": true, 00:19:24.405 "data_offset": 2048, 00:19:24.405 "data_size": 63488 00:19:24.405 }, 00:19:24.405 { 00:19:24.405 "name": "BaseBdev4", 00:19:24.405 "uuid": "682afd46-3793-455f-b098-060e3e79a882", 00:19:24.405 "is_configured": true, 00:19:24.405 "data_offset": 2048, 00:19:24.405 "data_size": 63488 00:19:24.405 } 00:19:24.405 ] 00:19:24.405 }' 00:19:24.405 05:29:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.405 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.662 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.662 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.662 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.662 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:24.662 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.662 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:24.662 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.662 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.662 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 62ef1487-38d5-4bd0-97b9-447f9029bf6a 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.663 [2024-11-20 05:29:56.427978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:24.663 [2024-11-20 05:29:56.428191] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:24.663 [2024-11-20 05:29:56.428202] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:24.663 NewBaseBdev 00:19:24.663 [2024-11-20 05:29:56.428450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:24.663 [2024-11-20 05:29:56.428562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:24.663 [2024-11-20 05:29:56.428572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:24.663 [2024-11-20 05:29:56.428671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.663 [ 00:19:24.663 { 00:19:24.663 "name": "NewBaseBdev", 00:19:24.663 "aliases": [ 00:19:24.663 "62ef1487-38d5-4bd0-97b9-447f9029bf6a" 00:19:24.663 ], 00:19:24.663 "product_name": "Malloc disk", 00:19:24.663 "block_size": 512, 00:19:24.663 "num_blocks": 65536, 00:19:24.663 "uuid": "62ef1487-38d5-4bd0-97b9-447f9029bf6a", 00:19:24.663 "assigned_rate_limits": { 00:19:24.663 "rw_ios_per_sec": 0, 00:19:24.663 "rw_mbytes_per_sec": 0, 00:19:24.663 "r_mbytes_per_sec": 0, 00:19:24.663 "w_mbytes_per_sec": 0 00:19:24.663 }, 00:19:24.663 "claimed": true, 00:19:24.663 "claim_type": "exclusive_write", 00:19:24.663 "zoned": false, 00:19:24.663 "supported_io_types": { 00:19:24.663 "read": true, 00:19:24.663 "write": true, 00:19:24.663 "unmap": true, 00:19:24.663 "flush": true, 00:19:24.663 "reset": true, 00:19:24.663 "nvme_admin": false, 00:19:24.663 "nvme_io": false, 00:19:24.663 "nvme_io_md": false, 00:19:24.663 "write_zeroes": true, 00:19:24.663 "zcopy": true, 00:19:24.663 "get_zone_info": false, 00:19:24.663 "zone_management": false, 00:19:24.663 "zone_append": false, 00:19:24.663 "compare": false, 00:19:24.663 "compare_and_write": false, 00:19:24.663 "abort": true, 00:19:24.663 "seek_hole": false, 00:19:24.663 "seek_data": false, 00:19:24.663 "copy": true, 00:19:24.663 "nvme_iov_md": false 00:19:24.663 }, 00:19:24.663 "memory_domains": [ 00:19:24.663 { 00:19:24.663 "dma_device_id": "system", 00:19:24.663 "dma_device_type": 1 00:19:24.663 }, 00:19:24.663 { 00:19:24.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.663 "dma_device_type": 2 00:19:24.663 } 
00:19:24.663 ], 00:19:24.663 "driver_specific": {} 00:19:24.663 } 00:19:24.663 ] 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.663 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.663 05:29:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.923 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.923 "name": "Existed_Raid", 00:19:24.923 "uuid": "25c9ae50-8bad-4bb6-b9b3-2616bb8ce03d", 00:19:24.923 "strip_size_kb": 64, 00:19:24.923 "state": "online", 00:19:24.923 "raid_level": "concat", 00:19:24.923 "superblock": true, 00:19:24.923 "num_base_bdevs": 4, 00:19:24.923 "num_base_bdevs_discovered": 4, 00:19:24.923 "num_base_bdevs_operational": 4, 00:19:24.923 "base_bdevs_list": [ 00:19:24.923 { 00:19:24.923 "name": "NewBaseBdev", 00:19:24.923 "uuid": "62ef1487-38d5-4bd0-97b9-447f9029bf6a", 00:19:24.923 "is_configured": true, 00:19:24.923 "data_offset": 2048, 00:19:24.923 "data_size": 63488 00:19:24.923 }, 00:19:24.923 { 00:19:24.923 "name": "BaseBdev2", 00:19:24.923 "uuid": "df51d1ae-4712-4859-ac15-adcfcb5f7800", 00:19:24.923 "is_configured": true, 00:19:24.923 "data_offset": 2048, 00:19:24.923 "data_size": 63488 00:19:24.923 }, 00:19:24.923 { 00:19:24.923 "name": "BaseBdev3", 00:19:24.923 "uuid": "7c6864e7-7f03-4149-8862-e572d75e2bd3", 00:19:24.923 "is_configured": true, 00:19:24.923 "data_offset": 2048, 00:19:24.923 "data_size": 63488 00:19:24.923 }, 00:19:24.923 { 00:19:24.923 "name": "BaseBdev4", 00:19:24.923 "uuid": "682afd46-3793-455f-b098-060e3e79a882", 00:19:24.923 "is_configured": true, 00:19:24.923 "data_offset": 2048, 00:19:24.923 "data_size": 63488 00:19:24.923 } 00:19:24.923 ] 00:19:24.923 }' 00:19:24.923 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.923 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.181 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:25.181 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 
00:19:25.181 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:25.181 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:25.181 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.182 [2024-11-20 05:29:56.824447] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:25.182 "name": "Existed_Raid", 00:19:25.182 "aliases": [ 00:19:25.182 "25c9ae50-8bad-4bb6-b9b3-2616bb8ce03d" 00:19:25.182 ], 00:19:25.182 "product_name": "Raid Volume", 00:19:25.182 "block_size": 512, 00:19:25.182 "num_blocks": 253952, 00:19:25.182 "uuid": "25c9ae50-8bad-4bb6-b9b3-2616bb8ce03d", 00:19:25.182 "assigned_rate_limits": { 00:19:25.182 "rw_ios_per_sec": 0, 00:19:25.182 "rw_mbytes_per_sec": 0, 00:19:25.182 "r_mbytes_per_sec": 0, 00:19:25.182 "w_mbytes_per_sec": 0 00:19:25.182 }, 00:19:25.182 "claimed": false, 00:19:25.182 "zoned": false, 00:19:25.182 "supported_io_types": { 00:19:25.182 "read": true, 00:19:25.182 "write": true, 00:19:25.182 "unmap": true, 00:19:25.182 "flush": true, 00:19:25.182 "reset": true, 00:19:25.182 "nvme_admin": 
false, 00:19:25.182 "nvme_io": false, 00:19:25.182 "nvme_io_md": false, 00:19:25.182 "write_zeroes": true, 00:19:25.182 "zcopy": false, 00:19:25.182 "get_zone_info": false, 00:19:25.182 "zone_management": false, 00:19:25.182 "zone_append": false, 00:19:25.182 "compare": false, 00:19:25.182 "compare_and_write": false, 00:19:25.182 "abort": false, 00:19:25.182 "seek_hole": false, 00:19:25.182 "seek_data": false, 00:19:25.182 "copy": false, 00:19:25.182 "nvme_iov_md": false 00:19:25.182 }, 00:19:25.182 "memory_domains": [ 00:19:25.182 { 00:19:25.182 "dma_device_id": "system", 00:19:25.182 "dma_device_type": 1 00:19:25.182 }, 00:19:25.182 { 00:19:25.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.182 "dma_device_type": 2 00:19:25.182 }, 00:19:25.182 { 00:19:25.182 "dma_device_id": "system", 00:19:25.182 "dma_device_type": 1 00:19:25.182 }, 00:19:25.182 { 00:19:25.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.182 "dma_device_type": 2 00:19:25.182 }, 00:19:25.182 { 00:19:25.182 "dma_device_id": "system", 00:19:25.182 "dma_device_type": 1 00:19:25.182 }, 00:19:25.182 { 00:19:25.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.182 "dma_device_type": 2 00:19:25.182 }, 00:19:25.182 { 00:19:25.182 "dma_device_id": "system", 00:19:25.182 "dma_device_type": 1 00:19:25.182 }, 00:19:25.182 { 00:19:25.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.182 "dma_device_type": 2 00:19:25.182 } 00:19:25.182 ], 00:19:25.182 "driver_specific": { 00:19:25.182 "raid": { 00:19:25.182 "uuid": "25c9ae50-8bad-4bb6-b9b3-2616bb8ce03d", 00:19:25.182 "strip_size_kb": 64, 00:19:25.182 "state": "online", 00:19:25.182 "raid_level": "concat", 00:19:25.182 "superblock": true, 00:19:25.182 "num_base_bdevs": 4, 00:19:25.182 "num_base_bdevs_discovered": 4, 00:19:25.182 "num_base_bdevs_operational": 4, 00:19:25.182 "base_bdevs_list": [ 00:19:25.182 { 00:19:25.182 "name": "NewBaseBdev", 00:19:25.182 "uuid": "62ef1487-38d5-4bd0-97b9-447f9029bf6a", 00:19:25.182 "is_configured": 
true, 00:19:25.182 "data_offset": 2048, 00:19:25.182 "data_size": 63488 00:19:25.182 }, 00:19:25.182 { 00:19:25.182 "name": "BaseBdev2", 00:19:25.182 "uuid": "df51d1ae-4712-4859-ac15-adcfcb5f7800", 00:19:25.182 "is_configured": true, 00:19:25.182 "data_offset": 2048, 00:19:25.182 "data_size": 63488 00:19:25.182 }, 00:19:25.182 { 00:19:25.182 "name": "BaseBdev3", 00:19:25.182 "uuid": "7c6864e7-7f03-4149-8862-e572d75e2bd3", 00:19:25.182 "is_configured": true, 00:19:25.182 "data_offset": 2048, 00:19:25.182 "data_size": 63488 00:19:25.182 }, 00:19:25.182 { 00:19:25.182 "name": "BaseBdev4", 00:19:25.182 "uuid": "682afd46-3793-455f-b098-060e3e79a882", 00:19:25.182 "is_configured": true, 00:19:25.182 "data_offset": 2048, 00:19:25.182 "data_size": 63488 00:19:25.182 } 00:19:25.182 ] 00:19:25.182 } 00:19:25.182 } 00:19:25.182 }' 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:25.182 BaseBdev2 00:19:25.182 BaseBdev3 00:19:25.182 BaseBdev4' 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:25.182 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:25.183 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.183 05:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:25.183 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.183 05:29:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:25.183 05:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.183 05:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:25.183 05:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:25.183 05:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:25.183 05:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.183 05:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:25.183 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.183 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.441 [2024-11-20 05:29:57.032142] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.441 [2024-11-20 05:29:57.032173] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:25.441 [2024-11-20 05:29:57.032248] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.441 [2024-11-20 05:29:57.032316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.441 [2024-11-20 05:29:57.032326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70164 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 70164 ']' 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 70164 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70164 00:19:25.441 killing process with pid 70164 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70164' 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 70164 00:19:25.441 [2024-11-20 05:29:57.057938] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:25.441 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 70164 00:19:25.441 [2024-11-20 05:29:57.263652] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:19:26.376 05:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:26.376 00:19:26.376 real 0m8.479s 00:19:26.376 user 0m13.556s 00:19:26.376 sys 0m1.432s 00:19:26.376 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:26.376 ************************************ 00:19:26.376 05:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.376 END TEST raid_state_function_test_sb 00:19:26.376 ************************************ 00:19:26.377 05:29:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:19:26.377 05:29:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:26.377 05:29:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:26.377 05:29:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:26.377 ************************************ 00:19:26.377 START TEST raid_superblock_test 00:19:26.377 ************************************ 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:26.377 05:29:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70801 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70801 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 70801 ']' 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:26.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:26.377 05:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.377 [2024-11-20 05:29:57.990733] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:19:26.377 [2024-11-20 05:29:57.990869] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70801 ] 00:19:26.377 [2024-11-20 05:29:58.151800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.636 [2024-11-20 05:29:58.271164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.636 [2024-11-20 05:29:58.419494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.636 [2024-11-20 05:29:58.419542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:27.204 05:29:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.204 malloc1 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.204 [2024-11-20 05:29:58.872411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:27.204 [2024-11-20 05:29:58.872658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.204 [2024-11-20 05:29:58.872707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:27.204 [2024-11-20 05:29:58.873165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.204 [2024-11-20 05:29:58.875662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.204 [2024-11-20 05:29:58.875807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:27.204 pt1 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.204 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:27.204 05:29:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.205 malloc2 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.205 [2024-11-20 05:29:58.918787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:27.205 [2024-11-20 05:29:58.918858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.205 [2024-11-20 05:29:58.918884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:27.205 
[2024-11-20 05:29:58.918893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.205 [2024-11-20 05:29:58.921253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.205 [2024-11-20 05:29:58.921294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:27.205 pt2 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.205 malloc3 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:27.205 
05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.205 [2024-11-20 05:29:58.971190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:27.205 [2024-11-20 05:29:58.971420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.205 [2024-11-20 05:29:58.971475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:27.205 [2024-11-20 05:29:58.971534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.205 [2024-11-20 05:29:58.973906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.205 [2024-11-20 05:29:58.974036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:27.205 pt3 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.205 05:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.205 malloc4 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.205 [2024-11-20 05:29:59.013539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:27.205 [2024-11-20 05:29:59.013717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.205 [2024-11-20 05:29:59.013758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:27.205 [2024-11-20 05:29:59.013816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.205 [2024-11-20 05:29:59.016150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.205 [2024-11-20 05:29:59.016268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:27.205 pt4 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 
00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.205 [2024-11-20 05:29:59.025622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:27.205 [2024-11-20 05:29:59.027650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:27.205 [2024-11-20 05:29:59.027721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:27.205 [2024-11-20 05:29:59.027802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:27.205 [2024-11-20 05:29:59.028015] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:27.205 [2024-11-20 05:29:59.028026] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:27.205 [2024-11-20 05:29:59.028336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:27.205 [2024-11-20 05:29:59.028517] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:27.205 [2024-11-20 05:29:59.028528] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:27.205 [2024-11-20 05:29:59.028692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 
-- # local raid_level=concat 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.205 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.465 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.465 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.465 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.465 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.465 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.465 "name": "raid_bdev1", 00:19:27.465 "uuid": "d92125d8-9e32-4da9-bb68-da6e4e5d8d11", 00:19:27.465 "strip_size_kb": 64, 00:19:27.465 "state": "online", 00:19:27.465 "raid_level": "concat", 00:19:27.465 "superblock": true, 00:19:27.465 "num_base_bdevs": 4, 00:19:27.465 "num_base_bdevs_discovered": 4, 00:19:27.465 "num_base_bdevs_operational": 4, 00:19:27.465 "base_bdevs_list": [ 00:19:27.465 { 00:19:27.465 "name": "pt1", 00:19:27.465 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:27.465 "is_configured": true, 00:19:27.465 "data_offset": 2048, 00:19:27.465 "data_size": 63488 00:19:27.465 }, 00:19:27.465 { 00:19:27.465 "name": "pt2", 00:19:27.465 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:19:27.465 "is_configured": true, 00:19:27.465 "data_offset": 2048, 00:19:27.465 "data_size": 63488 00:19:27.465 }, 00:19:27.465 { 00:19:27.465 "name": "pt3", 00:19:27.465 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:27.465 "is_configured": true, 00:19:27.465 "data_offset": 2048, 00:19:27.465 "data_size": 63488 00:19:27.465 }, 00:19:27.465 { 00:19:27.465 "name": "pt4", 00:19:27.465 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:27.465 "is_configured": true, 00:19:27.465 "data_offset": 2048, 00:19:27.465 "data_size": 63488 00:19:27.465 } 00:19:27.465 ] 00:19:27.465 }' 00:19:27.465 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.465 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:27.724 [2024-11-20 05:29:59.350048] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:27.724 "name": "raid_bdev1", 00:19:27.724 "aliases": [ 00:19:27.724 "d92125d8-9e32-4da9-bb68-da6e4e5d8d11" 00:19:27.724 ], 00:19:27.724 "product_name": "Raid Volume", 00:19:27.724 "block_size": 512, 00:19:27.724 "num_blocks": 253952, 00:19:27.724 "uuid": "d92125d8-9e32-4da9-bb68-da6e4e5d8d11", 00:19:27.724 "assigned_rate_limits": { 00:19:27.724 "rw_ios_per_sec": 0, 00:19:27.724 "rw_mbytes_per_sec": 0, 00:19:27.724 "r_mbytes_per_sec": 0, 00:19:27.724 "w_mbytes_per_sec": 0 00:19:27.724 }, 00:19:27.724 "claimed": false, 00:19:27.724 "zoned": false, 00:19:27.724 "supported_io_types": { 00:19:27.724 "read": true, 00:19:27.724 "write": true, 00:19:27.724 "unmap": true, 00:19:27.724 "flush": true, 00:19:27.724 "reset": true, 00:19:27.724 "nvme_admin": false, 00:19:27.724 "nvme_io": false, 00:19:27.724 "nvme_io_md": false, 00:19:27.724 "write_zeroes": true, 00:19:27.724 "zcopy": false, 00:19:27.724 "get_zone_info": false, 00:19:27.724 "zone_management": false, 00:19:27.724 "zone_append": false, 00:19:27.724 "compare": false, 00:19:27.724 "compare_and_write": false, 00:19:27.724 "abort": false, 00:19:27.724 "seek_hole": false, 00:19:27.724 "seek_data": false, 00:19:27.724 "copy": false, 00:19:27.724 "nvme_iov_md": false 00:19:27.724 }, 00:19:27.724 "memory_domains": [ 00:19:27.724 { 00:19:27.724 "dma_device_id": "system", 00:19:27.724 "dma_device_type": 1 00:19:27.724 }, 00:19:27.724 { 00:19:27.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.724 "dma_device_type": 2 00:19:27.724 }, 00:19:27.724 { 00:19:27.724 "dma_device_id": "system", 00:19:27.724 "dma_device_type": 1 00:19:27.724 }, 00:19:27.724 { 00:19:27.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.724 "dma_device_type": 2 00:19:27.724 }, 00:19:27.724 { 00:19:27.724 
"dma_device_id": "system", 00:19:27.724 "dma_device_type": 1 00:19:27.724 }, 00:19:27.724 { 00:19:27.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.724 "dma_device_type": 2 00:19:27.724 }, 00:19:27.724 { 00:19:27.724 "dma_device_id": "system", 00:19:27.724 "dma_device_type": 1 00:19:27.724 }, 00:19:27.724 { 00:19:27.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.724 "dma_device_type": 2 00:19:27.724 } 00:19:27.724 ], 00:19:27.724 "driver_specific": { 00:19:27.724 "raid": { 00:19:27.724 "uuid": "d92125d8-9e32-4da9-bb68-da6e4e5d8d11", 00:19:27.724 "strip_size_kb": 64, 00:19:27.724 "state": "online", 00:19:27.724 "raid_level": "concat", 00:19:27.724 "superblock": true, 00:19:27.724 "num_base_bdevs": 4, 00:19:27.724 "num_base_bdevs_discovered": 4, 00:19:27.724 "num_base_bdevs_operational": 4, 00:19:27.724 "base_bdevs_list": [ 00:19:27.724 { 00:19:27.724 "name": "pt1", 00:19:27.724 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:27.724 "is_configured": true, 00:19:27.724 "data_offset": 2048, 00:19:27.724 "data_size": 63488 00:19:27.724 }, 00:19:27.724 { 00:19:27.724 "name": "pt2", 00:19:27.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:27.724 "is_configured": true, 00:19:27.724 "data_offset": 2048, 00:19:27.724 "data_size": 63488 00:19:27.724 }, 00:19:27.724 { 00:19:27.724 "name": "pt3", 00:19:27.724 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:27.724 "is_configured": true, 00:19:27.724 "data_offset": 2048, 00:19:27.724 "data_size": 63488 00:19:27.724 }, 00:19:27.724 { 00:19:27.724 "name": "pt4", 00:19:27.724 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:27.724 "is_configured": true, 00:19:27.724 "data_offset": 2048, 00:19:27.724 "data_size": 63488 00:19:27.724 } 00:19:27.724 ] 00:19:27.724 } 00:19:27.724 } 00:19:27.724 }' 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:27.724 05:29:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:27.724 pt2 00:19:27.724 pt3 00:19:27.724 pt4' 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.724 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.725 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:27.983 [2024-11-20 05:29:59.590079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d92125d8-9e32-4da9-bb68-da6e4e5d8d11 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d92125d8-9e32-4da9-bb68-da6e4e5d8d11 ']' 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.983 [2024-11-20 05:29:59.617755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:27.983 [2024-11-20 05:29:59.617785] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:27.983 [2024-11-20 05:29:59.617872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:27.983 [2024-11-20 05:29:59.617949] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:27.983 [2024-11-20 05:29:59.617965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:27.983 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat 
-b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.984 [2024-11-20 05:29:59.729807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:27.984 [2024-11-20 05:29:59.731874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:27.984 [2024-11-20 05:29:59.731927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:27.984 [2024-11-20 05:29:59.731962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:27.984 [2024-11-20 05:29:59.732017] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:27.984 [2024-11-20 05:29:59.732077] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:27.984 [2024-11-20 05:29:59.732097] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:27.984 [2024-11-20 05:29:59.732117] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:27.984 [2024-11-20 05:29:59.732130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:27.984 [2024-11-20 05:29:59.732142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:27.984 request: 00:19:27.984 { 00:19:27.984 "name": "raid_bdev1", 00:19:27.984 "raid_level": "concat", 00:19:27.984 "base_bdevs": [ 00:19:27.984 "malloc1", 00:19:27.984 "malloc2", 00:19:27.984 "malloc3", 00:19:27.984 "malloc4" 00:19:27.984 ], 00:19:27.984 "strip_size_kb": 64, 00:19:27.984 "superblock": false, 00:19:27.984 "method": "bdev_raid_create", 00:19:27.984 "req_id": 1 00:19:27.984 } 00:19:27.984 Got JSON-RPC error response 00:19:27.984 response: 00:19:27.984 { 00:19:27.984 "code": -17, 00:19:27.984 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:27.984 } 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.984 [2024-11-20 05:29:59.785775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:27.984 [2024-11-20 05:29:59.785842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.984 [2024-11-20 05:29:59.785861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:27.984 [2024-11-20 05:29:59.785871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.984 [2024-11-20 05:29:59.788212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.984 [2024-11-20 05:29:59.788353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:27.984 [2024-11-20 05:29:59.788466] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:27.984 [2024-11-20 05:29:59.788532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:27.984 pt1 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.984 05:29:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.984 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.242 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.242 "name": "raid_bdev1", 00:19:28.242 "uuid": "d92125d8-9e32-4da9-bb68-da6e4e5d8d11", 00:19:28.242 "strip_size_kb": 64, 00:19:28.242 "state": "configuring", 00:19:28.242 "raid_level": "concat", 00:19:28.242 "superblock": true, 00:19:28.242 "num_base_bdevs": 4, 00:19:28.242 "num_base_bdevs_discovered": 1, 00:19:28.242 "num_base_bdevs_operational": 4, 00:19:28.242 "base_bdevs_list": [ 00:19:28.242 { 00:19:28.242 "name": "pt1", 00:19:28.242 "uuid": "00000000-0000-0000-0000-000000000001", 
00:19:28.242 "is_configured": true, 00:19:28.242 "data_offset": 2048, 00:19:28.242 "data_size": 63488 00:19:28.242 }, 00:19:28.242 { 00:19:28.242 "name": null, 00:19:28.242 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:28.242 "is_configured": false, 00:19:28.242 "data_offset": 2048, 00:19:28.242 "data_size": 63488 00:19:28.242 }, 00:19:28.242 { 00:19:28.242 "name": null, 00:19:28.242 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:28.242 "is_configured": false, 00:19:28.242 "data_offset": 2048, 00:19:28.242 "data_size": 63488 00:19:28.242 }, 00:19:28.242 { 00:19:28.242 "name": null, 00:19:28.242 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:28.243 "is_configured": false, 00:19:28.243 "data_offset": 2048, 00:19:28.243 "data_size": 63488 00:19:28.243 } 00:19:28.243 ] 00:19:28.243 }' 00:19:28.243 05:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.243 05:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.503 [2024-11-20 05:30:00.141882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:28.503 [2024-11-20 05:30:00.141961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.503 [2024-11-20 05:30:00.141981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:28.503 [2024-11-20 05:30:00.141992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.503 [2024-11-20 
05:30:00.142464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.503 [2024-11-20 05:30:00.142486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:28.503 [2024-11-20 05:30:00.142569] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:28.503 [2024-11-20 05:30:00.142593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:28.503 pt2 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.503 [2024-11-20 05:30:00.149912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.503 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.503 "name": "raid_bdev1", 00:19:28.503 "uuid": "d92125d8-9e32-4da9-bb68-da6e4e5d8d11", 00:19:28.503 "strip_size_kb": 64, 00:19:28.503 "state": "configuring", 00:19:28.503 "raid_level": "concat", 00:19:28.503 "superblock": true, 00:19:28.503 "num_base_bdevs": 4, 00:19:28.503 "num_base_bdevs_discovered": 1, 00:19:28.503 "num_base_bdevs_operational": 4, 00:19:28.503 "base_bdevs_list": [ 00:19:28.503 { 00:19:28.503 "name": "pt1", 00:19:28.503 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:28.503 "is_configured": true, 00:19:28.503 "data_offset": 2048, 00:19:28.503 "data_size": 63488 00:19:28.503 }, 00:19:28.503 { 00:19:28.503 "name": null, 00:19:28.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:28.503 "is_configured": false, 00:19:28.503 "data_offset": 0, 00:19:28.503 "data_size": 63488 00:19:28.503 }, 00:19:28.503 { 00:19:28.503 "name": null, 00:19:28.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:28.503 "is_configured": false, 00:19:28.503 "data_offset": 2048, 00:19:28.503 "data_size": 63488 00:19:28.503 }, 00:19:28.504 { 00:19:28.504 "name": null, 00:19:28.504 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:19:28.504 "is_configured": false, 00:19:28.504 "data_offset": 2048, 00:19:28.504 "data_size": 63488 00:19:28.504 } 00:19:28.504 ] 00:19:28.504 }' 00:19:28.504 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.504 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.827 [2024-11-20 05:30:00.473969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:28.827 [2024-11-20 05:30:00.474044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.827 [2024-11-20 05:30:00.474065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:28.827 [2024-11-20 05:30:00.474074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.827 [2024-11-20 05:30:00.474574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.827 [2024-11-20 05:30:00.474596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:28.827 [2024-11-20 05:30:00.474683] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:28.827 [2024-11-20 05:30:00.474705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:28.827 pt2 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.827 [2024-11-20 05:30:00.481960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:28.827 [2024-11-20 05:30:00.482025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.827 [2024-11-20 05:30:00.482048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:28.827 [2024-11-20 05:30:00.482057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.827 [2024-11-20 05:30:00.482501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.827 [2024-11-20 05:30:00.482521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:28.827 [2024-11-20 05:30:00.482599] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:28.827 [2024-11-20 05:30:00.482619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:28.827 pt3 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # 
rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.827 [2024-11-20 05:30:00.489934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:28.827 [2024-11-20 05:30:00.489990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.827 [2024-11-20 05:30:00.490010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:28.827 [2024-11-20 05:30:00.490017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.827 [2024-11-20 05:30:00.490488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.827 [2024-11-20 05:30:00.490513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:28.827 [2024-11-20 05:30:00.490592] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:28.827 [2024-11-20 05:30:00.490611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:28.827 [2024-11-20 05:30:00.490760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:28.827 [2024-11-20 05:30:00.490769] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:28.827 [2024-11-20 05:30:00.491018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:28.827 [2024-11-20 05:30:00.491163] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:28.827 [2024-11-20 05:30:00.491174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:28.827 [2024-11-20 05:30:00.491302] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.827 pt4 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:28.827 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.828 "name": "raid_bdev1", 00:19:28.828 "uuid": "d92125d8-9e32-4da9-bb68-da6e4e5d8d11", 00:19:28.828 "strip_size_kb": 64, 00:19:28.828 "state": "online", 00:19:28.828 "raid_level": "concat", 00:19:28.828 "superblock": true, 00:19:28.828 "num_base_bdevs": 4, 00:19:28.828 "num_base_bdevs_discovered": 4, 00:19:28.828 "num_base_bdevs_operational": 4, 00:19:28.828 "base_bdevs_list": [ 00:19:28.828 { 00:19:28.828 "name": "pt1", 00:19:28.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:28.828 "is_configured": true, 00:19:28.828 "data_offset": 2048, 00:19:28.828 "data_size": 63488 00:19:28.828 }, 00:19:28.828 { 00:19:28.828 "name": "pt2", 00:19:28.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:28.828 "is_configured": true, 00:19:28.828 "data_offset": 2048, 00:19:28.828 "data_size": 63488 00:19:28.828 }, 00:19:28.828 { 00:19:28.828 "name": "pt3", 00:19:28.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:28.828 "is_configured": true, 00:19:28.828 "data_offset": 2048, 00:19:28.828 "data_size": 63488 00:19:28.828 }, 00:19:28.828 { 00:19:28.828 "name": "pt4", 00:19:28.828 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:28.828 "is_configured": true, 00:19:28.828 "data_offset": 2048, 00:19:28.828 "data_size": 63488 00:19:28.828 } 00:19:28.828 ] 00:19:28.828 }' 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.828 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.101 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:29.101 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:29.101 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:19:29.101 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:29.101 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:29.101 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:29.101 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:29.101 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.101 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.101 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:29.101 [2024-11-20 05:30:00.802428] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:29.101 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.101 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:29.101 "name": "raid_bdev1", 00:19:29.101 "aliases": [ 00:19:29.101 "d92125d8-9e32-4da9-bb68-da6e4e5d8d11" 00:19:29.101 ], 00:19:29.101 "product_name": "Raid Volume", 00:19:29.101 "block_size": 512, 00:19:29.101 "num_blocks": 253952, 00:19:29.101 "uuid": "d92125d8-9e32-4da9-bb68-da6e4e5d8d11", 00:19:29.101 "assigned_rate_limits": { 00:19:29.101 "rw_ios_per_sec": 0, 00:19:29.101 "rw_mbytes_per_sec": 0, 00:19:29.101 "r_mbytes_per_sec": 0, 00:19:29.101 "w_mbytes_per_sec": 0 00:19:29.101 }, 00:19:29.101 "claimed": false, 00:19:29.101 "zoned": false, 00:19:29.101 "supported_io_types": { 00:19:29.101 "read": true, 00:19:29.101 "write": true, 00:19:29.101 "unmap": true, 00:19:29.101 "flush": true, 00:19:29.101 "reset": true, 00:19:29.101 "nvme_admin": false, 00:19:29.101 "nvme_io": false, 00:19:29.101 "nvme_io_md": false, 00:19:29.101 "write_zeroes": true, 00:19:29.101 "zcopy": false, 00:19:29.101 "get_zone_info": false, 
00:19:29.101 "zone_management": false, 00:19:29.101 "zone_append": false, 00:19:29.101 "compare": false, 00:19:29.101 "compare_and_write": false, 00:19:29.101 "abort": false, 00:19:29.101 "seek_hole": false, 00:19:29.101 "seek_data": false, 00:19:29.101 "copy": false, 00:19:29.101 "nvme_iov_md": false 00:19:29.101 }, 00:19:29.101 "memory_domains": [ 00:19:29.101 { 00:19:29.101 "dma_device_id": "system", 00:19:29.101 "dma_device_type": 1 00:19:29.101 }, 00:19:29.101 { 00:19:29.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.101 "dma_device_type": 2 00:19:29.101 }, 00:19:29.101 { 00:19:29.101 "dma_device_id": "system", 00:19:29.101 "dma_device_type": 1 00:19:29.101 }, 00:19:29.101 { 00:19:29.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.101 "dma_device_type": 2 00:19:29.101 }, 00:19:29.101 { 00:19:29.101 "dma_device_id": "system", 00:19:29.101 "dma_device_type": 1 00:19:29.101 }, 00:19:29.101 { 00:19:29.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.101 "dma_device_type": 2 00:19:29.101 }, 00:19:29.101 { 00:19:29.101 "dma_device_id": "system", 00:19:29.101 "dma_device_type": 1 00:19:29.101 }, 00:19:29.101 { 00:19:29.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.101 "dma_device_type": 2 00:19:29.101 } 00:19:29.101 ], 00:19:29.101 "driver_specific": { 00:19:29.101 "raid": { 00:19:29.101 "uuid": "d92125d8-9e32-4da9-bb68-da6e4e5d8d11", 00:19:29.101 "strip_size_kb": 64, 00:19:29.101 "state": "online", 00:19:29.101 "raid_level": "concat", 00:19:29.101 "superblock": true, 00:19:29.101 "num_base_bdevs": 4, 00:19:29.101 "num_base_bdevs_discovered": 4, 00:19:29.101 "num_base_bdevs_operational": 4, 00:19:29.101 "base_bdevs_list": [ 00:19:29.101 { 00:19:29.101 "name": "pt1", 00:19:29.101 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:29.101 "is_configured": true, 00:19:29.101 "data_offset": 2048, 00:19:29.101 "data_size": 63488 00:19:29.101 }, 00:19:29.101 { 00:19:29.101 "name": "pt2", 00:19:29.101 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:19:29.101 "is_configured": true, 00:19:29.101 "data_offset": 2048, 00:19:29.101 "data_size": 63488 00:19:29.101 }, 00:19:29.101 { 00:19:29.101 "name": "pt3", 00:19:29.101 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:29.101 "is_configured": true, 00:19:29.101 "data_offset": 2048, 00:19:29.101 "data_size": 63488 00:19:29.101 }, 00:19:29.101 { 00:19:29.101 "name": "pt4", 00:19:29.101 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:29.101 "is_configured": true, 00:19:29.102 "data_offset": 2048, 00:19:29.102 "data_size": 63488 00:19:29.102 } 00:19:29.102 ] 00:19:29.102 } 00:19:29.102 } 00:19:29.102 }' 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:29.102 pt2 00:19:29.102 pt3 00:19:29.102 pt4' 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.102 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.361 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.361 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:29.361 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:29.361 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:29.361 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:29.361 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.361 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.361 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:29.361 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.361 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:29.361 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:29.361 05:30:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:29.361 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:29.361 05:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:29.361 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.361 05:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.361 05:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.361 05:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:29.361 05:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:29.361 05:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:29.361 05:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.362 [2024-11-20 05:30:01.026426] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d92125d8-9e32-4da9-bb68-da6e4e5d8d11 '!=' d92125d8-9e32-4da9-bb68-da6e4e5d8d11 ']' 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 
00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70801 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 70801 ']' 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 70801 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70801 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70801' 00:19:29.362 killing process with pid 70801 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 70801 00:19:29.362 [2024-11-20 05:30:01.077686] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:29.362 05:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 70801 00:19:29.362 [2024-11-20 05:30:01.077935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:29.362 [2024-11-20 05:30:01.078070] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:29.362 [2024-11-20 05:30:01.078140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:29.621 [2024-11-20 05:30:01.342948] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:30.187 05:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:30.187 
************************************ 00:19:30.187 END TEST raid_superblock_test 00:19:30.187 ************************************ 00:19:30.187 00:19:30.187 real 0m4.092s 00:19:30.187 user 0m5.827s 00:19:30.187 sys 0m0.711s 00:19:30.187 05:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:30.187 05:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.446 05:30:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:19:30.446 05:30:02 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:30.446 05:30:02 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:30.446 05:30:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:30.446 ************************************ 00:19:30.446 START TEST raid_read_error_test 00:19:30.446 ************************************ 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 
00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:30.446 05:30:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YUkMvb9JBA 00:19:30.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71049 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71049 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 71049 ']' 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:30.446 05:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.447 05:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:30.447 [2024-11-20 05:30:02.139950] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:19:30.447 [2024-11-20 05:30:02.140273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71049 ] 00:19:30.704 [2024-11-20 05:30:02.298564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.704 [2024-11-20 05:30:02.399665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.704 [2024-11-20 05:30:02.521386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:30.704 [2024-11-20 05:30:02.521610] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.270 BaseBdev1_malloc 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.270 true 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.270 [2024-11-20 05:30:03.060163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:31.270 [2024-11-20 05:30:03.060240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.270 [2024-11-20 05:30:03.060261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:31.270 [2024-11-20 05:30:03.060272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.270 [2024-11-20 05:30:03.062270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.270 [2024-11-20 05:30:03.062320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:31.270 BaseBdev1 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.270 BaseBdev2_malloc 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.270 true 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.270 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.529 [2024-11-20 05:30:03.102127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:31.529 [2024-11-20 05:30:03.102203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.529 [2024-11-20 05:30:03.102222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:31.529 [2024-11-20 05:30:03.102233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.529 [2024-11-20 05:30:03.104235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.529 [2024-11-20 05:30:03.104281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:31.529 BaseBdev2 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.529 BaseBdev3_malloc 00:19:31.529 05:30:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.529 true 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.529 [2024-11-20 05:30:03.159796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:31.529 [2024-11-20 05:30:03.159870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.529 [2024-11-20 05:30:03.159890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:31.529 [2024-11-20 05:30:03.159899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.529 [2024-11-20 05:30:03.161902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.529 [2024-11-20 05:30:03.161946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:31.529 BaseBdev3 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.529 BaseBdev4_malloc 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.529 true 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.529 [2024-11-20 05:30:03.201812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:31.529 [2024-11-20 05:30:03.201880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.529 [2024-11-20 05:30:03.201901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:31.529 [2024-11-20 05:30:03.201910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.529 [2024-11-20 05:30:03.203885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.529 [2024-11-20 05:30:03.203936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:31.529 BaseBdev4 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.529 [2024-11-20 05:30:03.209882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:31.529 [2024-11-20 05:30:03.211588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:31.529 [2024-11-20 05:30:03.211659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:31.529 [2024-11-20 05:30:03.211715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:31.529 [2024-11-20 05:30:03.211927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:19:31.529 [2024-11-20 05:30:03.211939] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:31.529 [2024-11-20 05:30:03.212190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:19:31.529 [2024-11-20 05:30:03.212321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:19:31.529 [2024-11-20 05:30:03.212330] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:19:31.529 [2024-11-20 05:30:03.212499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:31.529 05:30:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.529 "name": "raid_bdev1", 00:19:31.529 "uuid": "576b962f-ddf4-4279-9ff4-9090e125c813", 00:19:31.529 "strip_size_kb": 64, 00:19:31.529 "state": "online", 00:19:31.529 "raid_level": "concat", 00:19:31.529 "superblock": true, 00:19:31.529 "num_base_bdevs": 4, 00:19:31.529 "num_base_bdevs_discovered": 4, 00:19:31.529 "num_base_bdevs_operational": 4, 00:19:31.529 "base_bdevs_list": [ 
00:19:31.529 { 00:19:31.529 "name": "BaseBdev1", 00:19:31.529 "uuid": "6306f13b-8a33-543d-a557-4529e8fc5e20", 00:19:31.529 "is_configured": true, 00:19:31.529 "data_offset": 2048, 00:19:31.529 "data_size": 63488 00:19:31.529 }, 00:19:31.529 { 00:19:31.529 "name": "BaseBdev2", 00:19:31.529 "uuid": "e4976387-9a59-5651-99fb-9d41fd5832bf", 00:19:31.529 "is_configured": true, 00:19:31.529 "data_offset": 2048, 00:19:31.529 "data_size": 63488 00:19:31.529 }, 00:19:31.529 { 00:19:31.529 "name": "BaseBdev3", 00:19:31.529 "uuid": "ef6d6123-35a9-50d9-aba6-41107355a622", 00:19:31.529 "is_configured": true, 00:19:31.529 "data_offset": 2048, 00:19:31.529 "data_size": 63488 00:19:31.529 }, 00:19:31.529 { 00:19:31.529 "name": "BaseBdev4", 00:19:31.529 "uuid": "e4fe7620-00d4-5c98-a56e-53061e48aa06", 00:19:31.529 "is_configured": true, 00:19:31.529 "data_offset": 2048, 00:19:31.529 "data_size": 63488 00:19:31.529 } 00:19:31.529 ] 00:19:31.529 }' 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.529 05:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.884 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:31.884 05:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:31.884 [2024-11-20 05:30:03.622826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.817 05:30:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.817 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.818 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.818 05:30:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.818 "name": "raid_bdev1", 00:19:32.818 "uuid": "576b962f-ddf4-4279-9ff4-9090e125c813", 00:19:32.818 "strip_size_kb": 64, 00:19:32.818 "state": "online", 00:19:32.818 "raid_level": "concat", 00:19:32.818 "superblock": true, 00:19:32.818 "num_base_bdevs": 4, 00:19:32.818 "num_base_bdevs_discovered": 4, 00:19:32.818 "num_base_bdevs_operational": 4, 00:19:32.818 "base_bdevs_list": [ 00:19:32.818 { 00:19:32.818 "name": "BaseBdev1", 00:19:32.818 "uuid": "6306f13b-8a33-543d-a557-4529e8fc5e20", 00:19:32.818 "is_configured": true, 00:19:32.818 "data_offset": 2048, 00:19:32.818 "data_size": 63488 00:19:32.818 }, 00:19:32.818 { 00:19:32.818 "name": "BaseBdev2", 00:19:32.818 "uuid": "e4976387-9a59-5651-99fb-9d41fd5832bf", 00:19:32.818 "is_configured": true, 00:19:32.818 "data_offset": 2048, 00:19:32.818 "data_size": 63488 00:19:32.818 }, 00:19:32.818 { 00:19:32.818 "name": "BaseBdev3", 00:19:32.818 "uuid": "ef6d6123-35a9-50d9-aba6-41107355a622", 00:19:32.818 "is_configured": true, 00:19:32.818 "data_offset": 2048, 00:19:32.818 "data_size": 63488 00:19:32.818 }, 00:19:32.818 { 00:19:32.818 "name": "BaseBdev4", 00:19:32.818 "uuid": "e4fe7620-00d4-5c98-a56e-53061e48aa06", 00:19:32.818 "is_configured": true, 00:19:32.818 "data_offset": 2048, 00:19:32.818 "data_size": 63488 00:19:32.818 } 00:19:32.818 ] 00:19:32.818 }' 00:19:32.818 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.818 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.076 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:33.076 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.076 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.076 [2024-11-20 05:30:04.859794] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:33.076 [2024-11-20 05:30:04.859837] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.076 [2024-11-20 05:30:04.862267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.076 [2024-11-20 05:30:04.862331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.076 [2024-11-20 05:30:04.862384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:33.076 [2024-11-20 05:30:04.862395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:19:33.076 { 00:19:33.076 "results": [ 00:19:33.076 { 00:19:33.076 "job": "raid_bdev1", 00:19:33.076 "core_mask": "0x1", 00:19:33.076 "workload": "randrw", 00:19:33.076 "percentage": 50, 00:19:33.076 "status": "finished", 00:19:33.076 "queue_depth": 1, 00:19:33.076 "io_size": 131072, 00:19:33.076 "runtime": 1.235227, 00:19:33.076 "iops": 17004.971555835487, 00:19:33.076 "mibps": 2125.621444479436, 00:19:33.076 "io_failed": 1, 00:19:33.076 "io_timeout": 0, 00:19:33.076 "avg_latency_us": 81.24438116582076, 00:19:33.076 "min_latency_us": 25.6, 00:19:33.076 "max_latency_us": 1329.6246153846155 00:19:33.076 } 00:19:33.076 ], 00:19:33.076 "core_count": 1 00:19:33.076 } 00:19:33.076 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.076 05:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71049 00:19:33.076 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 71049 ']' 00:19:33.076 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 71049 00:19:33.076 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:19:33.076 05:30:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:33.076 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71049 00:19:33.076 killing process with pid 71049 00:19:33.076 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:33.076 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:33.076 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71049' 00:19:33.076 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 71049 00:19:33.076 [2024-11-20 05:30:04.895963] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:33.076 05:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 71049 00:19:33.334 [2024-11-20 05:30:05.066179] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:33.903 05:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YUkMvb9JBA 00:19:33.903 05:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:33.903 05:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:33.903 ************************************ 00:19:33.903 END TEST raid_read_error_test 00:19:33.903 ************************************ 00:19:33.903 05:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:19:33.903 05:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:19:33.903 05:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:33.903 05:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:33.903 05:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:19:33.903 00:19:33.903 real 0m3.652s 
00:19:33.903 user 0m4.336s 00:19:33.903 sys 0m0.437s 00:19:33.903 05:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:33.903 05:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.162 05:30:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:19:34.162 05:30:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:34.162 05:30:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:34.162 05:30:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.162 ************************************ 00:19:34.162 START TEST raid_write_error_test 00:19:34.162 ************************************ 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zRxzSBpD7i 00:19:34.162 05:30:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71189 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71189 00:19:34.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 71189 ']' 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.162 05:30:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:34.162 [2024-11-20 05:30:05.830143] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:19:34.162 [2024-11-20 05:30:05.830256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71189 ] 00:19:34.162 [2024-11-20 05:30:05.977866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.421 [2024-11-20 05:30:06.092353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.421 [2024-11-20 05:30:06.238814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.421 [2024-11-20 05:30:06.238878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.992 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:34.992 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:19:34.992 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:34.992 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:34.992 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.992 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.992 BaseBdev1_malloc 00:19:34.992 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.992 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:34.992 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.992 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.992 true 00:19:34.992 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:34.992 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:34.992 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.992 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.992 [2024-11-20 05:30:06.774329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:34.992 [2024-11-20 05:30:06.774407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.992 [2024-11-20 05:30:06.774433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:34.992 [2024-11-20 05:30:06.774446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.992 [2024-11-20 05:30:06.776800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.992 [2024-11-20 05:30:06.776840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:34.992 BaseBdev1 00:19:34.992 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.992 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:34.993 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:34.993 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.993 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.993 BaseBdev2_malloc 00:19:34.993 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.993 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:34.993 05:30:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.993 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.291 true 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.291 [2024-11-20 05:30:06.832651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:35.291 [2024-11-20 05:30:06.832713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.291 [2024-11-20 05:30:06.832735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:35.291 [2024-11-20 05:30:06.832746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.291 [2024-11-20 05:30:06.835064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.291 [2024-11-20 05:30:06.835105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:35.291 BaseBdev2 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:19:35.291 BaseBdev3_malloc 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.291 true 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.291 [2024-11-20 05:30:06.891253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:35.291 [2024-11-20 05:30:06.891442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.291 [2024-11-20 05:30:06.891468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:35.291 [2024-11-20 05:30:06.891479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.291 [2024-11-20 05:30:06.893770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.291 [2024-11-20 05:30:06.893806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:35.291 BaseBdev3 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.291 BaseBdev4_malloc 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.291 true 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.291 [2024-11-20 05:30:06.937750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:35.291 [2024-11-20 05:30:06.937798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.291 [2024-11-20 05:30:06.937815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:35.291 [2024-11-20 05:30:06.937826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.291 [2024-11-20 05:30:06.940081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.291 [2024-11-20 05:30:06.940122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:35.291 BaseBdev4 
00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.291 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.292 [2024-11-20 05:30:06.945821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:35.292 [2024-11-20 05:30:06.947873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:35.292 [2024-11-20 05:30:06.947953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:35.292 [2024-11-20 05:30:06.948021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:35.292 [2024-11-20 05:30:06.948259] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:19:35.292 [2024-11-20 05:30:06.948272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:35.292 [2024-11-20 05:30:06.948550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:19:35.292 [2024-11-20 05:30:06.948707] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:19:35.292 [2024-11-20 05:30:06.948720] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:19:35.292 [2024-11-20 05:30:06.948864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.292 "name": "raid_bdev1", 00:19:35.292 "uuid": "965bd255-aae8-4083-afda-054f953bf359", 00:19:35.292 "strip_size_kb": 64, 00:19:35.292 "state": "online", 00:19:35.292 "raid_level": "concat", 00:19:35.292 "superblock": true, 00:19:35.292 "num_base_bdevs": 4, 00:19:35.292 "num_base_bdevs_discovered": 4, 00:19:35.292 
"num_base_bdevs_operational": 4, 00:19:35.292 "base_bdevs_list": [ 00:19:35.292 { 00:19:35.292 "name": "BaseBdev1", 00:19:35.292 "uuid": "999aa478-fac2-5baa-9eef-c79f242ed472", 00:19:35.292 "is_configured": true, 00:19:35.292 "data_offset": 2048, 00:19:35.292 "data_size": 63488 00:19:35.292 }, 00:19:35.292 { 00:19:35.292 "name": "BaseBdev2", 00:19:35.292 "uuid": "30ecea42-52a1-5a34-b1f5-b70d0d6932c9", 00:19:35.292 "is_configured": true, 00:19:35.292 "data_offset": 2048, 00:19:35.292 "data_size": 63488 00:19:35.292 }, 00:19:35.292 { 00:19:35.292 "name": "BaseBdev3", 00:19:35.292 "uuid": "988d2378-eb04-5960-a284-4e2ba5f82513", 00:19:35.292 "is_configured": true, 00:19:35.292 "data_offset": 2048, 00:19:35.292 "data_size": 63488 00:19:35.292 }, 00:19:35.292 { 00:19:35.292 "name": "BaseBdev4", 00:19:35.292 "uuid": "61c21759-260e-5d23-9636-187d20f2dac8", 00:19:35.292 "is_configured": true, 00:19:35.292 "data_offset": 2048, 00:19:35.292 "data_size": 63488 00:19:35.292 } 00:19:35.292 ] 00:19:35.292 }' 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.292 05:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.551 05:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:35.551 05:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:35.551 [2024-11-20 05:30:07.346935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:19:36.494 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:36.494 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.494 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.494 05:30:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.494 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:36.494 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:19:36.494 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:19:36.494 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:36.494 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.495 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.495 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:36.495 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:36.495 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:36.495 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.495 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.495 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.495 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.495 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.495 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.495 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.495 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.495 05:30:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.495 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.495 "name": "raid_bdev1", 00:19:36.495 "uuid": "965bd255-aae8-4083-afda-054f953bf359", 00:19:36.495 "strip_size_kb": 64, 00:19:36.495 "state": "online", 00:19:36.495 "raid_level": "concat", 00:19:36.495 "superblock": true, 00:19:36.495 "num_base_bdevs": 4, 00:19:36.495 "num_base_bdevs_discovered": 4, 00:19:36.495 "num_base_bdevs_operational": 4, 00:19:36.495 "base_bdevs_list": [ 00:19:36.495 { 00:19:36.495 "name": "BaseBdev1", 00:19:36.495 "uuid": "999aa478-fac2-5baa-9eef-c79f242ed472", 00:19:36.495 "is_configured": true, 00:19:36.495 "data_offset": 2048, 00:19:36.495 "data_size": 63488 00:19:36.495 }, 00:19:36.495 { 00:19:36.495 "name": "BaseBdev2", 00:19:36.495 "uuid": "30ecea42-52a1-5a34-b1f5-b70d0d6932c9", 00:19:36.495 "is_configured": true, 00:19:36.495 "data_offset": 2048, 00:19:36.495 "data_size": 63488 00:19:36.495 }, 00:19:36.495 { 00:19:36.495 "name": "BaseBdev3", 00:19:36.495 "uuid": "988d2378-eb04-5960-a284-4e2ba5f82513", 00:19:36.495 "is_configured": true, 00:19:36.495 "data_offset": 2048, 00:19:36.495 "data_size": 63488 00:19:36.495 }, 00:19:36.495 { 00:19:36.495 "name": "BaseBdev4", 00:19:36.495 "uuid": "61c21759-260e-5d23-9636-187d20f2dac8", 00:19:36.495 "is_configured": true, 00:19:36.495 "data_offset": 2048, 00:19:36.495 "data_size": 63488 00:19:36.495 } 00:19:36.495 ] 00:19:36.495 }' 00:19:36.495 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.495 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.754 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:36.754 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.754 05:30:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:36.754 [2024-11-20 05:30:08.585028] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.754 [2024-11-20 05:30:08.585173] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:37.016 [2024-11-20 05:30:08.588269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.016 [2024-11-20 05:30:08.588449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.016 [2024-11-20 05:30:08.588508] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.016 [2024-11-20 05:30:08.588520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:19:37.016 { 00:19:37.016 "results": [ 00:19:37.016 { 00:19:37.016 "job": "raid_bdev1", 00:19:37.016 "core_mask": "0x1", 00:19:37.016 "workload": "randrw", 00:19:37.016 "percentage": 50, 00:19:37.016 "status": "finished", 00:19:37.016 "queue_depth": 1, 00:19:37.016 "io_size": 131072, 00:19:37.016 "runtime": 1.236164, 00:19:37.016 "iops": 13908.348730427355, 00:19:37.016 "mibps": 1738.5435913034194, 00:19:37.016 "io_failed": 1, 00:19:37.016 "io_timeout": 0, 00:19:37.016 "avg_latency_us": 98.97971277995006, 00:19:37.016 "min_latency_us": 33.47692307692308, 00:19:37.016 "max_latency_us": 1701.4153846153847 00:19:37.016 } 00:19:37.016 ], 00:19:37.016 "core_count": 1 00:19:37.016 } 00:19:37.016 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.016 05:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71189 00:19:37.016 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 71189 ']' 00:19:37.016 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 71189 00:19:37.016 05:30:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:19:37.016 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:37.016 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71189 00:19:37.016 killing process with pid 71189 00:19:37.016 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:37.016 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:37.016 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71189' 00:19:37.016 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 71189 00:19:37.016 05:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 71189 00:19:37.016 [2024-11-20 05:30:08.613685] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:37.016 [2024-11-20 05:30:08.830452] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:37.978 05:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:37.978 05:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zRxzSBpD7i 00:19:37.978 05:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:37.978 05:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:19:37.978 05:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:19:37.978 ************************************ 00:19:37.978 END TEST raid_write_error_test 00:19:37.978 ************************************ 00:19:37.978 05:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:37.978 05:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:37.978 05:30:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:19:37.978 00:19:37.978 real 0m3.870s 00:19:37.978 user 0m4.516s 00:19:37.978 sys 0m0.468s 00:19:37.978 05:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:37.978 05:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.978 05:30:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:19:37.978 05:30:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:19:37.978 05:30:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:37.978 05:30:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:37.978 05:30:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:37.978 ************************************ 00:19:37.979 START TEST raid_state_function_test 00:19:37.979 ************************************ 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:37.979 05:30:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71323 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71323' 00:19:37.979 Process raid pid: 71323 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71323 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:37.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71323 ']' 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:37.979 05:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.979 [2024-11-20 05:30:09.759883] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:19:37.979 [2024-11-20 05:30:09.760018] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.239 [2024-11-20 05:30:09.928976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.239 [2024-11-20 05:30:10.046400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.500 [2024-11-20 05:30:10.197874] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:38.500 [2024-11-20 05:30:10.197923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.761 [2024-11-20 05:30:10.573107] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:38.761 [2024-11-20 05:30:10.573167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:38.761 [2024-11-20 05:30:10.573179] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:38.761 [2024-11-20 05:30:10.573188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:38.761 [2024-11-20 05:30:10.573195] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:19:38.761 [2024-11-20 05:30:10.573204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:38.761 [2024-11-20 05:30:10.573211] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:38.761 [2024-11-20 05:30:10.573220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.761 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.024 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.024 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.024 "name": "Existed_Raid", 00:19:39.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.024 "strip_size_kb": 0, 00:19:39.024 "state": "configuring", 00:19:39.024 "raid_level": "raid1", 00:19:39.024 "superblock": false, 00:19:39.024 "num_base_bdevs": 4, 00:19:39.024 "num_base_bdevs_discovered": 0, 00:19:39.024 "num_base_bdevs_operational": 4, 00:19:39.024 "base_bdevs_list": [ 00:19:39.024 { 00:19:39.024 "name": "BaseBdev1", 00:19:39.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.024 "is_configured": false, 00:19:39.024 "data_offset": 0, 00:19:39.024 "data_size": 0 00:19:39.024 }, 00:19:39.024 { 00:19:39.024 "name": "BaseBdev2", 00:19:39.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.024 "is_configured": false, 00:19:39.024 "data_offset": 0, 00:19:39.024 "data_size": 0 00:19:39.024 }, 00:19:39.024 { 00:19:39.024 "name": "BaseBdev3", 00:19:39.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.024 "is_configured": false, 00:19:39.024 "data_offset": 0, 00:19:39.024 "data_size": 0 00:19:39.024 }, 00:19:39.024 { 00:19:39.024 "name": "BaseBdev4", 00:19:39.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.024 "is_configured": false, 00:19:39.024 "data_offset": 0, 00:19:39.024 "data_size": 0 00:19:39.024 } 00:19:39.024 ] 00:19:39.024 }' 00:19:39.024 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.024 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.286 [2024-11-20 05:30:10.901147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:39.286 [2024-11-20 05:30:10.901192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.286 [2024-11-20 05:30:10.913147] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:39.286 [2024-11-20 05:30:10.913196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:39.286 [2024-11-20 05:30:10.913206] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:39.286 [2024-11-20 05:30:10.913217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:39.286 [2024-11-20 05:30:10.913224] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:39.286 [2024-11-20 05:30:10.913233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:39.286 [2024-11-20 05:30:10.913241] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:39.286 [2024-11-20 05:30:10.913250] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.286 [2024-11-20 05:30:10.948279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:39.286 BaseBdev1 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.286 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.286 [ 00:19:39.286 { 00:19:39.287 "name": "BaseBdev1", 00:19:39.287 "aliases": [ 00:19:39.287 "d7b7d6b5-b8e9-432a-b7af-fd2bed384dad" 00:19:39.287 ], 00:19:39.287 "product_name": "Malloc disk", 00:19:39.287 "block_size": 512, 00:19:39.287 "num_blocks": 65536, 00:19:39.287 "uuid": "d7b7d6b5-b8e9-432a-b7af-fd2bed384dad", 00:19:39.287 "assigned_rate_limits": { 00:19:39.287 "rw_ios_per_sec": 0, 00:19:39.287 "rw_mbytes_per_sec": 0, 00:19:39.287 "r_mbytes_per_sec": 0, 00:19:39.287 "w_mbytes_per_sec": 0 00:19:39.287 }, 00:19:39.287 "claimed": true, 00:19:39.287 "claim_type": "exclusive_write", 00:19:39.287 "zoned": false, 00:19:39.287 "supported_io_types": { 00:19:39.287 "read": true, 00:19:39.287 "write": true, 00:19:39.287 "unmap": true, 00:19:39.287 "flush": true, 00:19:39.287 "reset": true, 00:19:39.287 "nvme_admin": false, 00:19:39.287 "nvme_io": false, 00:19:39.287 "nvme_io_md": false, 00:19:39.287 "write_zeroes": true, 00:19:39.287 "zcopy": true, 00:19:39.287 "get_zone_info": false, 00:19:39.287 "zone_management": false, 00:19:39.287 "zone_append": false, 00:19:39.287 "compare": false, 00:19:39.287 "compare_and_write": false, 00:19:39.287 "abort": true, 00:19:39.287 "seek_hole": false, 00:19:39.287 "seek_data": false, 00:19:39.287 "copy": true, 00:19:39.287 "nvme_iov_md": false 00:19:39.287 }, 00:19:39.287 "memory_domains": [ 00:19:39.287 { 00:19:39.287 "dma_device_id": "system", 00:19:39.287 "dma_device_type": 1 00:19:39.287 }, 00:19:39.287 { 00:19:39.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.287 "dma_device_type": 2 00:19:39.287 } 00:19:39.287 ], 00:19:39.287 "driver_specific": {} 00:19:39.287 } 00:19:39.287 ] 00:19:39.287 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:19:39.287 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:39.287 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:39.287 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:39.287 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:39.287 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:39.287 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:39.287 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:39.287 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.287 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.287 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.287 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.287 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.287 05:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.287 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.287 05:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.287 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.287 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.287 "name": "Existed_Raid", 
00:19:39.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.287 "strip_size_kb": 0, 00:19:39.287 "state": "configuring", 00:19:39.287 "raid_level": "raid1", 00:19:39.287 "superblock": false, 00:19:39.287 "num_base_bdevs": 4, 00:19:39.287 "num_base_bdevs_discovered": 1, 00:19:39.287 "num_base_bdevs_operational": 4, 00:19:39.287 "base_bdevs_list": [ 00:19:39.287 { 00:19:39.287 "name": "BaseBdev1", 00:19:39.287 "uuid": "d7b7d6b5-b8e9-432a-b7af-fd2bed384dad", 00:19:39.287 "is_configured": true, 00:19:39.287 "data_offset": 0, 00:19:39.287 "data_size": 65536 00:19:39.287 }, 00:19:39.287 { 00:19:39.287 "name": "BaseBdev2", 00:19:39.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.287 "is_configured": false, 00:19:39.287 "data_offset": 0, 00:19:39.287 "data_size": 0 00:19:39.287 }, 00:19:39.287 { 00:19:39.287 "name": "BaseBdev3", 00:19:39.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.287 "is_configured": false, 00:19:39.287 "data_offset": 0, 00:19:39.287 "data_size": 0 00:19:39.287 }, 00:19:39.287 { 00:19:39.287 "name": "BaseBdev4", 00:19:39.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.287 "is_configured": false, 00:19:39.287 "data_offset": 0, 00:19:39.287 "data_size": 0 00:19:39.287 } 00:19:39.287 ] 00:19:39.287 }' 00:19:39.287 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.287 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.548 [2024-11-20 05:30:11.272413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:39.548 [2024-11-20 05:30:11.272468] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.548 [2024-11-20 05:30:11.280457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:39.548 [2024-11-20 05:30:11.282430] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:39.548 [2024-11-20 05:30:11.282473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:39.548 [2024-11-20 05:30:11.282482] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:39.548 [2024-11-20 05:30:11.282494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:39.548 [2024-11-20 05:30:11.282501] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:39.548 [2024-11-20 05:30:11.282510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:39.548 
05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.548 "name": "Existed_Raid", 00:19:39.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.548 "strip_size_kb": 0, 00:19:39.548 "state": "configuring", 00:19:39.548 "raid_level": "raid1", 00:19:39.548 "superblock": false, 00:19:39.548 "num_base_bdevs": 4, 00:19:39.548 "num_base_bdevs_discovered": 1, 
00:19:39.548 "num_base_bdevs_operational": 4, 00:19:39.548 "base_bdevs_list": [ 00:19:39.548 { 00:19:39.548 "name": "BaseBdev1", 00:19:39.548 "uuid": "d7b7d6b5-b8e9-432a-b7af-fd2bed384dad", 00:19:39.548 "is_configured": true, 00:19:39.548 "data_offset": 0, 00:19:39.548 "data_size": 65536 00:19:39.548 }, 00:19:39.548 { 00:19:39.548 "name": "BaseBdev2", 00:19:39.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.548 "is_configured": false, 00:19:39.548 "data_offset": 0, 00:19:39.548 "data_size": 0 00:19:39.548 }, 00:19:39.548 { 00:19:39.548 "name": "BaseBdev3", 00:19:39.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.548 "is_configured": false, 00:19:39.548 "data_offset": 0, 00:19:39.548 "data_size": 0 00:19:39.548 }, 00:19:39.548 { 00:19:39.548 "name": "BaseBdev4", 00:19:39.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.548 "is_configured": false, 00:19:39.548 "data_offset": 0, 00:19:39.548 "data_size": 0 00:19:39.548 } 00:19:39.548 ] 00:19:39.548 }' 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.548 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.811 [2024-11-20 05:30:11.617482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:39.811 BaseBdev2 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.811 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.811 [ 00:19:39.811 { 00:19:39.811 "name": "BaseBdev2", 00:19:39.811 "aliases": [ 00:19:39.811 "5da55c11-60bf-4756-bdc9-9aecaba5dd99" 00:19:39.811 ], 00:19:39.811 "product_name": "Malloc disk", 00:19:39.811 "block_size": 512, 00:19:39.811 "num_blocks": 65536, 00:19:39.811 "uuid": "5da55c11-60bf-4756-bdc9-9aecaba5dd99", 00:19:39.811 "assigned_rate_limits": { 00:19:39.811 "rw_ios_per_sec": 0, 00:19:39.811 "rw_mbytes_per_sec": 0, 00:19:39.811 "r_mbytes_per_sec": 0, 00:19:39.811 "w_mbytes_per_sec": 0 00:19:39.811 }, 00:19:39.811 "claimed": true, 00:19:39.811 "claim_type": "exclusive_write", 00:19:39.811 "zoned": false, 00:19:39.811 "supported_io_types": { 00:19:39.811 "read": true, 
00:19:39.811 "write": true, 00:19:39.811 "unmap": true, 00:19:39.811 "flush": true, 00:19:39.811 "reset": true, 00:19:39.811 "nvme_admin": false, 00:19:39.811 "nvme_io": false, 00:19:39.811 "nvme_io_md": false, 00:19:39.811 "write_zeroes": true, 00:19:39.812 "zcopy": true, 00:19:39.812 "get_zone_info": false, 00:19:39.812 "zone_management": false, 00:19:39.812 "zone_append": false, 00:19:39.812 "compare": false, 00:19:39.812 "compare_and_write": false, 00:19:39.812 "abort": true, 00:19:39.812 "seek_hole": false, 00:19:39.812 "seek_data": false, 00:19:39.812 "copy": true, 00:19:39.812 "nvme_iov_md": false 00:19:39.812 }, 00:19:39.812 "memory_domains": [ 00:19:39.812 { 00:19:39.812 "dma_device_id": "system", 00:19:39.812 "dma_device_type": 1 00:19:39.812 }, 00:19:39.812 { 00:19:39.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.072 "dma_device_type": 2 00:19:40.072 } 00:19:40.072 ], 00:19:40.072 "driver_specific": {} 00:19:40.072 } 00:19:40.072 ] 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.072 "name": "Existed_Raid", 00:19:40.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.072 "strip_size_kb": 0, 00:19:40.072 "state": "configuring", 00:19:40.072 "raid_level": "raid1", 00:19:40.072 "superblock": false, 00:19:40.072 "num_base_bdevs": 4, 00:19:40.072 "num_base_bdevs_discovered": 2, 00:19:40.072 "num_base_bdevs_operational": 4, 00:19:40.072 "base_bdevs_list": [ 00:19:40.072 { 00:19:40.072 "name": "BaseBdev1", 00:19:40.072 "uuid": "d7b7d6b5-b8e9-432a-b7af-fd2bed384dad", 00:19:40.072 "is_configured": true, 00:19:40.072 "data_offset": 0, 00:19:40.072 "data_size": 65536 00:19:40.072 }, 00:19:40.072 { 00:19:40.072 "name": "BaseBdev2", 00:19:40.072 "uuid": "5da55c11-60bf-4756-bdc9-9aecaba5dd99", 00:19:40.072 "is_configured": true, 
00:19:40.072 "data_offset": 0, 00:19:40.072 "data_size": 65536 00:19:40.072 }, 00:19:40.072 { 00:19:40.072 "name": "BaseBdev3", 00:19:40.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.072 "is_configured": false, 00:19:40.072 "data_offset": 0, 00:19:40.072 "data_size": 0 00:19:40.072 }, 00:19:40.072 { 00:19:40.072 "name": "BaseBdev4", 00:19:40.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.072 "is_configured": false, 00:19:40.072 "data_offset": 0, 00:19:40.072 "data_size": 0 00:19:40.072 } 00:19:40.072 ] 00:19:40.072 }' 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.072 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.333 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:40.333 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.333 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.333 [2024-11-20 05:30:11.988696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:40.333 BaseBdev3 00:19:40.333 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.333 05:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:40.333 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:40.333 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:40.333 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:40.333 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:40.333 05:30:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:40.333 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:40.333 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.333 05:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.333 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.333 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:40.333 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.333 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.333 [ 00:19:40.333 { 00:19:40.333 "name": "BaseBdev3", 00:19:40.333 "aliases": [ 00:19:40.333 "8343e66b-fc76-49a6-9305-5666087810cb" 00:19:40.333 ], 00:19:40.333 "product_name": "Malloc disk", 00:19:40.333 "block_size": 512, 00:19:40.333 "num_blocks": 65536, 00:19:40.333 "uuid": "8343e66b-fc76-49a6-9305-5666087810cb", 00:19:40.333 "assigned_rate_limits": { 00:19:40.333 "rw_ios_per_sec": 0, 00:19:40.333 "rw_mbytes_per_sec": 0, 00:19:40.333 "r_mbytes_per_sec": 0, 00:19:40.333 "w_mbytes_per_sec": 0 00:19:40.333 }, 00:19:40.333 "claimed": true, 00:19:40.333 "claim_type": "exclusive_write", 00:19:40.333 "zoned": false, 00:19:40.333 "supported_io_types": { 00:19:40.333 "read": true, 00:19:40.333 "write": true, 00:19:40.333 "unmap": true, 00:19:40.333 "flush": true, 00:19:40.333 "reset": true, 00:19:40.333 "nvme_admin": false, 00:19:40.333 "nvme_io": false, 00:19:40.333 "nvme_io_md": false, 00:19:40.333 "write_zeroes": true, 00:19:40.333 "zcopy": true, 00:19:40.333 "get_zone_info": false, 00:19:40.333 "zone_management": false, 00:19:40.333 "zone_append": false, 00:19:40.333 "compare": false, 00:19:40.333 "compare_and_write": false, 
00:19:40.333 "abort": true, 00:19:40.333 "seek_hole": false, 00:19:40.333 "seek_data": false, 00:19:40.333 "copy": true, 00:19:40.333 "nvme_iov_md": false 00:19:40.333 }, 00:19:40.333 "memory_domains": [ 00:19:40.333 { 00:19:40.333 "dma_device_id": "system", 00:19:40.333 "dma_device_type": 1 00:19:40.333 }, 00:19:40.333 { 00:19:40.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.334 "dma_device_type": 2 00:19:40.334 } 00:19:40.334 ], 00:19:40.334 "driver_specific": {} 00:19:40.334 } 00:19:40.334 ] 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.334 "name": "Existed_Raid", 00:19:40.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.334 "strip_size_kb": 0, 00:19:40.334 "state": "configuring", 00:19:40.334 "raid_level": "raid1", 00:19:40.334 "superblock": false, 00:19:40.334 "num_base_bdevs": 4, 00:19:40.334 "num_base_bdevs_discovered": 3, 00:19:40.334 "num_base_bdevs_operational": 4, 00:19:40.334 "base_bdevs_list": [ 00:19:40.334 { 00:19:40.334 "name": "BaseBdev1", 00:19:40.334 "uuid": "d7b7d6b5-b8e9-432a-b7af-fd2bed384dad", 00:19:40.334 "is_configured": true, 00:19:40.334 "data_offset": 0, 00:19:40.334 "data_size": 65536 00:19:40.334 }, 00:19:40.334 { 00:19:40.334 "name": "BaseBdev2", 00:19:40.334 "uuid": "5da55c11-60bf-4756-bdc9-9aecaba5dd99", 00:19:40.334 "is_configured": true, 00:19:40.334 "data_offset": 0, 00:19:40.334 "data_size": 65536 00:19:40.334 }, 00:19:40.334 { 00:19:40.334 "name": "BaseBdev3", 00:19:40.334 "uuid": "8343e66b-fc76-49a6-9305-5666087810cb", 00:19:40.334 "is_configured": true, 00:19:40.334 "data_offset": 0, 00:19:40.334 "data_size": 65536 00:19:40.334 }, 00:19:40.334 { 00:19:40.334 "name": "BaseBdev4", 00:19:40.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.334 "is_configured": false, 
00:19:40.334 "data_offset": 0, 00:19:40.334 "data_size": 0 00:19:40.334 } 00:19:40.334 ] 00:19:40.334 }' 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.334 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.594 [2024-11-20 05:30:12.361786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:40.594 [2024-11-20 05:30:12.361839] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:40.594 [2024-11-20 05:30:12.361847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:40.594 [2024-11-20 05:30:12.362131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:40.594 [2024-11-20 05:30:12.362298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:40.594 [2024-11-20 05:30:12.362309] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:40.594 [2024-11-20 05:30:12.362558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.594 BaseBdev4 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.594 [ 00:19:40.594 { 00:19:40.594 "name": "BaseBdev4", 00:19:40.594 "aliases": [ 00:19:40.594 "003cdcdd-1b76-425f-8a46-4bdd46707b27" 00:19:40.594 ], 00:19:40.594 "product_name": "Malloc disk", 00:19:40.594 "block_size": 512, 00:19:40.594 "num_blocks": 65536, 00:19:40.594 "uuid": "003cdcdd-1b76-425f-8a46-4bdd46707b27", 00:19:40.594 "assigned_rate_limits": { 00:19:40.594 "rw_ios_per_sec": 0, 00:19:40.594 "rw_mbytes_per_sec": 0, 00:19:40.594 "r_mbytes_per_sec": 0, 00:19:40.594 "w_mbytes_per_sec": 0 00:19:40.594 }, 00:19:40.594 "claimed": true, 00:19:40.594 "claim_type": "exclusive_write", 00:19:40.594 "zoned": false, 00:19:40.594 "supported_io_types": { 00:19:40.594 "read": true, 00:19:40.594 "write": true, 00:19:40.594 "unmap": true, 00:19:40.594 "flush": true, 00:19:40.594 "reset": true, 00:19:40.594 
"nvme_admin": false, 00:19:40.594 "nvme_io": false, 00:19:40.594 "nvme_io_md": false, 00:19:40.594 "write_zeroes": true, 00:19:40.594 "zcopy": true, 00:19:40.594 "get_zone_info": false, 00:19:40.594 "zone_management": false, 00:19:40.594 "zone_append": false, 00:19:40.594 "compare": false, 00:19:40.594 "compare_and_write": false, 00:19:40.594 "abort": true, 00:19:40.594 "seek_hole": false, 00:19:40.594 "seek_data": false, 00:19:40.594 "copy": true, 00:19:40.594 "nvme_iov_md": false 00:19:40.594 }, 00:19:40.594 "memory_domains": [ 00:19:40.594 { 00:19:40.594 "dma_device_id": "system", 00:19:40.594 "dma_device_type": 1 00:19:40.594 }, 00:19:40.594 { 00:19:40.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.594 "dma_device_type": 2 00:19:40.594 } 00:19:40.594 ], 00:19:40.594 "driver_specific": {} 00:19:40.594 } 00:19:40.594 ] 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:40.594 05:30:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.594 "name": "Existed_Raid", 00:19:40.594 "uuid": "52fb44d1-461a-48e2-a595-a3ae89b4fc3d", 00:19:40.594 "strip_size_kb": 0, 00:19:40.594 "state": "online", 00:19:40.594 "raid_level": "raid1", 00:19:40.594 "superblock": false, 00:19:40.594 "num_base_bdevs": 4, 00:19:40.594 "num_base_bdevs_discovered": 4, 00:19:40.594 "num_base_bdevs_operational": 4, 00:19:40.594 "base_bdevs_list": [ 00:19:40.594 { 00:19:40.594 "name": "BaseBdev1", 00:19:40.594 "uuid": "d7b7d6b5-b8e9-432a-b7af-fd2bed384dad", 00:19:40.594 "is_configured": true, 00:19:40.594 "data_offset": 0, 00:19:40.594 "data_size": 65536 00:19:40.594 }, 00:19:40.594 { 00:19:40.594 "name": "BaseBdev2", 00:19:40.594 "uuid": "5da55c11-60bf-4756-bdc9-9aecaba5dd99", 00:19:40.594 "is_configured": true, 00:19:40.594 "data_offset": 0, 00:19:40.594 "data_size": 65536 00:19:40.594 }, 00:19:40.594 { 00:19:40.594 "name": "BaseBdev3", 00:19:40.594 "uuid": 
"8343e66b-fc76-49a6-9305-5666087810cb", 00:19:40.594 "is_configured": true, 00:19:40.594 "data_offset": 0, 00:19:40.594 "data_size": 65536 00:19:40.594 }, 00:19:40.594 { 00:19:40.594 "name": "BaseBdev4", 00:19:40.594 "uuid": "003cdcdd-1b76-425f-8a46-4bdd46707b27", 00:19:40.594 "is_configured": true, 00:19:40.594 "data_offset": 0, 00:19:40.594 "data_size": 65536 00:19:40.594 } 00:19:40.594 ] 00:19:40.594 }' 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.594 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.855 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:40.855 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:41.116 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:41.116 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:41.116 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:41.116 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:41.116 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:41.116 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:41.116 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.116 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.116 [2024-11-20 05:30:12.702309] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:41.116 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.116 05:30:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:41.116 "name": "Existed_Raid", 00:19:41.116 "aliases": [ 00:19:41.116 "52fb44d1-461a-48e2-a595-a3ae89b4fc3d" 00:19:41.116 ], 00:19:41.116 "product_name": "Raid Volume", 00:19:41.116 "block_size": 512, 00:19:41.116 "num_blocks": 65536, 00:19:41.116 "uuid": "52fb44d1-461a-48e2-a595-a3ae89b4fc3d", 00:19:41.116 "assigned_rate_limits": { 00:19:41.116 "rw_ios_per_sec": 0, 00:19:41.116 "rw_mbytes_per_sec": 0, 00:19:41.116 "r_mbytes_per_sec": 0, 00:19:41.116 "w_mbytes_per_sec": 0 00:19:41.116 }, 00:19:41.116 "claimed": false, 00:19:41.116 "zoned": false, 00:19:41.116 "supported_io_types": { 00:19:41.116 "read": true, 00:19:41.116 "write": true, 00:19:41.116 "unmap": false, 00:19:41.116 "flush": false, 00:19:41.116 "reset": true, 00:19:41.116 "nvme_admin": false, 00:19:41.116 "nvme_io": false, 00:19:41.116 "nvme_io_md": false, 00:19:41.116 "write_zeroes": true, 00:19:41.116 "zcopy": false, 00:19:41.116 "get_zone_info": false, 00:19:41.116 "zone_management": false, 00:19:41.116 "zone_append": false, 00:19:41.116 "compare": false, 00:19:41.116 "compare_and_write": false, 00:19:41.116 "abort": false, 00:19:41.116 "seek_hole": false, 00:19:41.116 "seek_data": false, 00:19:41.116 "copy": false, 00:19:41.116 "nvme_iov_md": false 00:19:41.116 }, 00:19:41.116 "memory_domains": [ 00:19:41.116 { 00:19:41.116 "dma_device_id": "system", 00:19:41.116 "dma_device_type": 1 00:19:41.116 }, 00:19:41.116 { 00:19:41.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.116 "dma_device_type": 2 00:19:41.116 }, 00:19:41.116 { 00:19:41.116 "dma_device_id": "system", 00:19:41.116 "dma_device_type": 1 00:19:41.116 }, 00:19:41.116 { 00:19:41.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.116 "dma_device_type": 2 00:19:41.116 }, 00:19:41.116 { 00:19:41.116 "dma_device_id": "system", 00:19:41.116 "dma_device_type": 1 00:19:41.116 }, 00:19:41.116 { 00:19:41.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:19:41.116 "dma_device_type": 2 00:19:41.116 }, 00:19:41.116 { 00:19:41.116 "dma_device_id": "system", 00:19:41.116 "dma_device_type": 1 00:19:41.116 }, 00:19:41.116 { 00:19:41.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.116 "dma_device_type": 2 00:19:41.116 } 00:19:41.117 ], 00:19:41.117 "driver_specific": { 00:19:41.117 "raid": { 00:19:41.117 "uuid": "52fb44d1-461a-48e2-a595-a3ae89b4fc3d", 00:19:41.117 "strip_size_kb": 0, 00:19:41.117 "state": "online", 00:19:41.117 "raid_level": "raid1", 00:19:41.117 "superblock": false, 00:19:41.117 "num_base_bdevs": 4, 00:19:41.117 "num_base_bdevs_discovered": 4, 00:19:41.117 "num_base_bdevs_operational": 4, 00:19:41.117 "base_bdevs_list": [ 00:19:41.117 { 00:19:41.117 "name": "BaseBdev1", 00:19:41.117 "uuid": "d7b7d6b5-b8e9-432a-b7af-fd2bed384dad", 00:19:41.117 "is_configured": true, 00:19:41.117 "data_offset": 0, 00:19:41.117 "data_size": 65536 00:19:41.117 }, 00:19:41.117 { 00:19:41.117 "name": "BaseBdev2", 00:19:41.117 "uuid": "5da55c11-60bf-4756-bdc9-9aecaba5dd99", 00:19:41.117 "is_configured": true, 00:19:41.117 "data_offset": 0, 00:19:41.117 "data_size": 65536 00:19:41.117 }, 00:19:41.117 { 00:19:41.117 "name": "BaseBdev3", 00:19:41.117 "uuid": "8343e66b-fc76-49a6-9305-5666087810cb", 00:19:41.117 "is_configured": true, 00:19:41.117 "data_offset": 0, 00:19:41.117 "data_size": 65536 00:19:41.117 }, 00:19:41.117 { 00:19:41.117 "name": "BaseBdev4", 00:19:41.117 "uuid": "003cdcdd-1b76-425f-8a46-4bdd46707b27", 00:19:41.117 "is_configured": true, 00:19:41.117 "data_offset": 0, 00:19:41.117 "data_size": 65536 00:19:41.117 } 00:19:41.117 ] 00:19:41.117 } 00:19:41.117 } 00:19:41.117 }' 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:41.117 BaseBdev2 00:19:41.117 BaseBdev3 
00:19:41.117 BaseBdev4' 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.117 05:30:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.117 05:30:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.117 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.117 [2024-11-20 05:30:12.918019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.378 
05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:41.378 05:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:41.378 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:41.378 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:41.378 "name": "Existed_Raid",
00:19:41.378 "uuid": "52fb44d1-461a-48e2-a595-a3ae89b4fc3d",
00:19:41.378 "strip_size_kb": 0,
00:19:41.378 "state": "online",
00:19:41.378 "raid_level": "raid1",
00:19:41.378 "superblock": false,
00:19:41.378 "num_base_bdevs": 4,
00:19:41.378 "num_base_bdevs_discovered": 3,
00:19:41.378 "num_base_bdevs_operational": 3,
00:19:41.378 "base_bdevs_list": [
00:19:41.378 {
00:19:41.378 "name": null,
00:19:41.378 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:41.378 "is_configured": false,
00:19:41.378 "data_offset": 0,
00:19:41.378 "data_size": 65536
00:19:41.378 },
00:19:41.378 {
00:19:41.378 "name": "BaseBdev2",
00:19:41.378 "uuid": "5da55c11-60bf-4756-bdc9-9aecaba5dd99",
00:19:41.378 "is_configured": true,
00:19:41.378 "data_offset": 0,
00:19:41.378 "data_size": 65536
00:19:41.378 },
00:19:41.378 {
00:19:41.378 "name": "BaseBdev3",
00:19:41.378 "uuid": "8343e66b-fc76-49a6-9305-5666087810cb",
00:19:41.378 "is_configured": true,
00:19:41.378 "data_offset": 0,
00:19:41.378 "data_size": 65536
00:19:41.378 },
00:19:41.378 {
00:19:41.378 "name": "BaseBdev4",
00:19:41.378 "uuid": "003cdcdd-1b76-425f-8a46-4bdd46707b27",
00:19:41.378 "is_configured": true,
00:19:41.378 "data_offset": 0,
00:19:41.378 "data_size": 65536
00:19:41.378 }
00:19:41.378 ]
00:19:41.378 }'
00:19:41.378 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:41.378 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:41.640 [2024-11-20 05:30:13.321217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:41.640 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:41.640 [2024-11-20 05:30:13.429734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:41.901 [2024-11-20 05:30:13.540677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:19:41.901 [2024-11-20 05:30:13.540781] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:41.901 [2024-11-20 05:30:13.604704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:41.901 [2024-11-20 05:30:13.604753] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:41.901 [2024-11-20 05:30:13.604766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:41.901 BaseBdev2
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:41.901 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:41.902 [
00:19:41.902 {
00:19:41.902 "name": "BaseBdev2",
00:19:41.902 "aliases": [
00:19:41.902 "e24ad8a2-9b16-4a6f-ae2b-227c7f27632e"
00:19:41.902 ],
00:19:41.902 "product_name": "Malloc disk",
00:19:41.902 "block_size": 512,
00:19:41.902 "num_blocks": 65536,
00:19:41.902 "uuid": "e24ad8a2-9b16-4a6f-ae2b-227c7f27632e",
00:19:41.902 "assigned_rate_limits": {
00:19:41.902 "rw_ios_per_sec": 0,
00:19:41.902 "rw_mbytes_per_sec": 0,
00:19:41.902 "r_mbytes_per_sec": 0,
00:19:41.902 "w_mbytes_per_sec": 0
00:19:41.902 },
00:19:41.902 "claimed": false,
00:19:41.902 "zoned": false,
00:19:41.902 "supported_io_types": {
00:19:41.902 "read": true,
00:19:41.902 "write": true,
00:19:41.902 "unmap": true,
00:19:41.902 "flush": true,
00:19:41.902 "reset": true,
00:19:41.902 "nvme_admin": false,
00:19:41.902 "nvme_io": false,
00:19:41.902 "nvme_io_md": false,
00:19:41.902 "write_zeroes": true,
00:19:41.902 "zcopy": true,
00:19:41.902 "get_zone_info": false,
00:19:41.902 "zone_management": false,
00:19:41.902 "zone_append": false,
00:19:41.902 "compare": false,
00:19:41.902 "compare_and_write": false,
00:19:41.902 "abort": true,
00:19:41.902 "seek_hole": false,
00:19:41.902 "seek_data": false,
00:19:41.902 "copy": true,
00:19:41.902 "nvme_iov_md": false
00:19:41.902 },
00:19:41.902 "memory_domains": [
00:19:41.902 {
00:19:41.902 "dma_device_id": "system",
00:19:41.902 "dma_device_type": 1
00:19:41.902 },
00:19:41.902 {
00:19:41.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:41.902 "dma_device_type": 2
00:19:41.902 }
00:19:41.902 ],
00:19:41.902 "driver_specific": {}
00:19:41.902 }
00:19:41.902 ]
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:41.902 BaseBdev3
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:41.902 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:42.161 [
00:19:42.161 {
00:19:42.161 "name": "BaseBdev3",
00:19:42.161 "aliases": [
00:19:42.161 "66667755-66a8-41dc-93c0-2a4fddad784b"
00:19:42.161 ],
00:19:42.161 "product_name": "Malloc disk",
00:19:42.161 "block_size": 512,
00:19:42.161 "num_blocks": 65536,
00:19:42.161 "uuid": "66667755-66a8-41dc-93c0-2a4fddad784b",
00:19:42.161 "assigned_rate_limits": {
00:19:42.161 "rw_ios_per_sec": 0,
00:19:42.161 "rw_mbytes_per_sec": 0,
00:19:42.161 "r_mbytes_per_sec": 0,
00:19:42.161 "w_mbytes_per_sec": 0
00:19:42.161 },
00:19:42.161 "claimed": false,
00:19:42.161 "zoned": false,
00:19:42.161 "supported_io_types": {
00:19:42.161 "read": true,
00:19:42.161 "write": true,
00:19:42.161 "unmap": true,
00:19:42.161 "flush": true,
00:19:42.161 "reset": true,
00:19:42.161 "nvme_admin": false,
00:19:42.161 "nvme_io": false,
00:19:42.161 "nvme_io_md": false,
00:19:42.161 "write_zeroes": true,
00:19:42.161 "zcopy": true,
00:19:42.161 "get_zone_info": false,
00:19:42.161 "zone_management": false,
00:19:42.161 "zone_append": false,
00:19:42.161 "compare": false,
00:19:42.161 "compare_and_write": false,
00:19:42.161 "abort": true,
00:19:42.161 "seek_hole": false,
00:19:42.161 "seek_data": false,
00:19:42.161 "copy": true,
00:19:42.161 "nvme_iov_md": false
00:19:42.161 },
00:19:42.161 "memory_domains": [
00:19:42.161 {
00:19:42.161 "dma_device_id": "system",
00:19:42.161 "dma_device_type": 1
00:19:42.161 },
00:19:42.161 {
00:19:42.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:42.161 "dma_device_type": 2
00:19:42.161 }
00:19:42.161 ],
00:19:42.161 "driver_specific": {}
00:19:42.161 }
00:19:42.161 ]
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:42.161 BaseBdev4
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:19:42.161 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:42.162 [
00:19:42.162 {
00:19:42.162 "name": "BaseBdev4",
00:19:42.162 "aliases": [
00:19:42.162 "72ff51f3-ce4a-4588-91b0-097b609aad31"
00:19:42.162 ],
00:19:42.162 "product_name": "Malloc disk",
00:19:42.162 "block_size": 512,
00:19:42.162 "num_blocks": 65536,
00:19:42.162 "uuid": "72ff51f3-ce4a-4588-91b0-097b609aad31",
00:19:42.162 "assigned_rate_limits": {
00:19:42.162 "rw_ios_per_sec": 0,
00:19:42.162 "rw_mbytes_per_sec": 0,
00:19:42.162 "r_mbytes_per_sec": 0,
00:19:42.162 "w_mbytes_per_sec": 0
00:19:42.162 },
00:19:42.162 "claimed": false,
00:19:42.162 "zoned": false,
00:19:42.162 "supported_io_types": {
00:19:42.162 "read": true,
00:19:42.162 "write": true,
00:19:42.162 "unmap": true,
00:19:42.162 "flush": true,
00:19:42.162 "reset": true,
00:19:42.162 "nvme_admin": false,
00:19:42.162 "nvme_io": false,
00:19:42.162 "nvme_io_md": false,
00:19:42.162 "write_zeroes": true,
00:19:42.162 "zcopy": true,
00:19:42.162 "get_zone_info": false,
00:19:42.162 "zone_management": false,
00:19:42.162 "zone_append": false,
00:19:42.162 "compare": false,
00:19:42.162 "compare_and_write": false,
00:19:42.162 "abort": true,
00:19:42.162 "seek_hole": false,
00:19:42.162 "seek_data": false,
00:19:42.162 "copy": true,
00:19:42.162 "nvme_iov_md": false
00:19:42.162 },
00:19:42.162 "memory_domains": [
00:19:42.162 {
00:19:42.162 "dma_device_id": "system",
00:19:42.162 "dma_device_type": 1
00:19:42.162 },
00:19:42.162 {
00:19:42.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:42.162 "dma_device_type": 2
00:19:42.162 }
00:19:42.162 ],
00:19:42.162 "driver_specific": {}
00:19:42.162 }
00:19:42.162 ]
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:42.162 [2024-11-20 05:30:13.817250] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 [2024-11-20 05:30:13.817399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now [2024-11-20 05:30:13.817467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed [2024-11-20 05:30:13.819122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed [2024-11-20 05:30:13.819240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:42.162 "name": "Existed_Raid",
00:19:42.162 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:42.162 "strip_size_kb": 0,
00:19:42.162 "state": "configuring",
00:19:42.162 "raid_level": "raid1",
00:19:42.162 "superblock": false,
00:19:42.162 "num_base_bdevs": 4,
00:19:42.162 "num_base_bdevs_discovered": 3,
00:19:42.162 "num_base_bdevs_operational": 4,
00:19:42.162 "base_bdevs_list": [
00:19:42.162 {
00:19:42.162 "name": "BaseBdev1",
00:19:42.162 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:42.162 "is_configured": false,
00:19:42.162 "data_offset": 0,
00:19:42.162 "data_size": 0
00:19:42.162 },
00:19:42.162 {
00:19:42.162 "name": "BaseBdev2",
00:19:42.162 "uuid": "e24ad8a2-9b16-4a6f-ae2b-227c7f27632e",
00:19:42.162 "is_configured": true,
00:19:42.162 "data_offset": 0,
00:19:42.162 "data_size": 65536
00:19:42.162 },
00:19:42.162 {
00:19:42.162 "name": "BaseBdev3",
00:19:42.162 "uuid": "66667755-66a8-41dc-93c0-2a4fddad784b",
00:19:42.162 "is_configured": true,
00:19:42.162 "data_offset": 0,
00:19:42.162 "data_size": 65536
00:19:42.162 },
00:19:42.162 {
00:19:42.162 "name": "BaseBdev4",
00:19:42.162 "uuid": "72ff51f3-ce4a-4588-91b0-097b609aad31",
00:19:42.162 "is_configured": true,
00:19:42.162 "data_offset": 0,
00:19:42.162 "data_size": 65536
00:19:42.162 }
00:19:42.162 ]
00:19:42.162 }'
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:42.162 05:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:42.421 [2024-11-20 05:30:14.145335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:42.421 "name": "Existed_Raid",
00:19:42.421 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:42.421 "strip_size_kb": 0,
00:19:42.421 "state": "configuring",
00:19:42.421 "raid_level": "raid1",
00:19:42.421 "superblock": false,
00:19:42.421 "num_base_bdevs": 4,
00:19:42.421 "num_base_bdevs_discovered": 2,
00:19:42.421 "num_base_bdevs_operational": 4,
00:19:42.421 "base_bdevs_list": [
00:19:42.421 {
00:19:42.421 "name": "BaseBdev1",
00:19:42.421 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:42.421 "is_configured": false,
00:19:42.421 "data_offset": 0,
00:19:42.421 "data_size": 0
00:19:42.421 },
00:19:42.421 {
00:19:42.421 "name": null,
00:19:42.421 "uuid": "e24ad8a2-9b16-4a6f-ae2b-227c7f27632e",
00:19:42.421 "is_configured": false,
00:19:42.421 "data_offset": 0,
00:19:42.421 "data_size": 65536
00:19:42.421 },
00:19:42.421 {
00:19:42.421 "name": "BaseBdev3",
00:19:42.421 "uuid": "66667755-66a8-41dc-93c0-2a4fddad784b",
00:19:42.421 "is_configured": true,
00:19:42.421 "data_offset": 0,
00:19:42.421 "data_size": 65536
00:19:42.421 },
00:19:42.421 {
00:19:42.421 "name": "BaseBdev4",
00:19:42.421 "uuid": "72ff51f3-ce4a-4588-91b0-097b609aad31",
00:19:42.421 "is_configured": true,
00:19:42.421 "data_offset": 0,
00:19:42.421 "data_size": 65536
00:19:42.421 }
00:19:42.421 ]
00:19:42.421 }'
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:42.421 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:42.681 [2024-11-20 05:30:14.509679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:19:42.681 BaseBdev1
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:42.681 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:42.940 [
00:19:42.940 {
00:19:42.940 "name": "BaseBdev1",
00:19:42.940 "aliases": [
00:19:42.940 "9ddb18f1-d69b-4823-85ac-4d011854678f"
00:19:42.940 ],
00:19:42.940 "product_name": "Malloc disk",
00:19:42.940 "block_size": 512,
00:19:42.940 "num_blocks": 65536,
00:19:42.940 "uuid": "9ddb18f1-d69b-4823-85ac-4d011854678f",
00:19:42.940 "assigned_rate_limits": {
00:19:42.940 "rw_ios_per_sec": 0,
00:19:42.940 "rw_mbytes_per_sec": 0,
00:19:42.940 "r_mbytes_per_sec": 0,
00:19:42.940 "w_mbytes_per_sec": 0
00:19:42.940 },
00:19:42.940 "claimed": true,
00:19:42.940 "claim_type": "exclusive_write",
00:19:42.940 "zoned": false,
00:19:42.940 "supported_io_types": {
00:19:42.940 "read": true,
00:19:42.940 "write": true,
00:19:42.940 "unmap": true,
00:19:42.940 "flush": true,
00:19:42.940 "reset": true,
00:19:42.940 "nvme_admin": false,
00:19:42.940 "nvme_io": false,
00:19:42.940 "nvme_io_md": false,
00:19:42.940 "write_zeroes": true,
00:19:42.940 "zcopy": true,
00:19:42.940 "get_zone_info": false,
00:19:42.940 "zone_management": false,
00:19:42.940 "zone_append": false,
00:19:42.940 "compare": false,
00:19:42.940 "compare_and_write": false,
00:19:42.940 "abort": true,
00:19:42.940 "seek_hole": false,
00:19:42.940 "seek_data": false,
00:19:42.940 "copy": true,
00:19:42.940 "nvme_iov_md": false
00:19:42.940 },
00:19:42.940 "memory_domains": [
00:19:42.940 {
00:19:42.940 "dma_device_id": "system",
00:19:42.940 "dma_device_type": 1
00:19:42.940 },
00:19:42.940 {
00:19:42.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:42.940 "dma_device_type": 2
00:19:42.940 }
00:19:42.940 ],
00:19:42.940 "driver_specific": {}
00:19:42.940 }
00:19:42.940 ]
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:42.940 "name": "Existed_Raid",
00:19:42.940 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:42.940 "strip_size_kb": 0,
00:19:42.940 "state": "configuring",
00:19:42.940 "raid_level": "raid1",
00:19:42.940 "superblock": false,
00:19:42.940 "num_base_bdevs": 4,
00:19:42.940 "num_base_bdevs_discovered": 3,
00:19:42.940 "num_base_bdevs_operational": 4,
00:19:42.940 "base_bdevs_list": [
00:19:42.940 {
00:19:42.940 "name": "BaseBdev1",
00:19:42.940 "uuid": "9ddb18f1-d69b-4823-85ac-4d011854678f",
00:19:42.940 "is_configured": true,
00:19:42.940 "data_offset": 0,
00:19:42.940 "data_size": 65536
00:19:42.940 },
00:19:42.940 {
00:19:42.940 "name": null,
00:19:42.940 "uuid": "e24ad8a2-9b16-4a6f-ae2b-227c7f27632e",
00:19:42.940 "is_configured": false,
00:19:42.940 "data_offset": 0,
00:19:42.940 "data_size": 65536
00:19:42.940 },
00:19:42.940 {
00:19:42.940 "name": "BaseBdev3",
00:19:42.940 "uuid": "66667755-66a8-41dc-93c0-2a4fddad784b",
00:19:42.940 "is_configured": true,
00:19:42.940 "data_offset": 0,
00:19:42.940 "data_size": 65536
00:19:42.940 },
00:19:42.940 {
00:19:42.940 "name": "BaseBdev4",
00:19:42.940 "uuid": "72ff51f3-ce4a-4588-91b0-097b609aad31",
00:19:42.940 "is_configured": true,
00:19:42.940 "data_offset": 0,
00:19:42.940 "data_size": 65536
00:19:42.940 }
00:19:42.940 ]
00:19:42.940 }'
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:42.940 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:43.197 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:43.197 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:43.197 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:43.197 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:19:43.197 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:43.197 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:19:43.197 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:19:43.197 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:43.197 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:19:43.197 [2024-11-20 05:30:14.885832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:19:43.197 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:43.197 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:19:43.198 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:19:43.198 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:19:43.198 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:19:43.198 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:19:43.198 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:19:43.198 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:43.198 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:43.198 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:43.198 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:43.198 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:43.198 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.198 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.198 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.198 05:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.198 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.198 "name": "Existed_Raid", 00:19:43.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.198 "strip_size_kb": 0, 00:19:43.198 "state": "configuring", 00:19:43.198 "raid_level": "raid1", 00:19:43.198 "superblock": false, 00:19:43.198 "num_base_bdevs": 4, 00:19:43.198 "num_base_bdevs_discovered": 2, 00:19:43.198 "num_base_bdevs_operational": 4, 00:19:43.198 "base_bdevs_list": [ 00:19:43.198 { 00:19:43.198 "name": "BaseBdev1", 00:19:43.198 "uuid": "9ddb18f1-d69b-4823-85ac-4d011854678f", 00:19:43.198 "is_configured": true, 00:19:43.198 "data_offset": 0, 00:19:43.198 "data_size": 65536 00:19:43.198 }, 00:19:43.198 { 00:19:43.198 "name": null, 00:19:43.198 "uuid": "e24ad8a2-9b16-4a6f-ae2b-227c7f27632e", 00:19:43.198 "is_configured": false, 00:19:43.198 "data_offset": 0, 00:19:43.198 "data_size": 65536 00:19:43.198 }, 00:19:43.198 { 00:19:43.198 "name": null, 00:19:43.198 "uuid": "66667755-66a8-41dc-93c0-2a4fddad784b", 00:19:43.198 "is_configured": false, 00:19:43.198 "data_offset": 0, 00:19:43.198 "data_size": 65536 00:19:43.198 }, 00:19:43.198 { 00:19:43.198 "name": "BaseBdev4", 00:19:43.198 "uuid": "72ff51f3-ce4a-4588-91b0-097b609aad31", 00:19:43.198 "is_configured": true, 00:19:43.198 "data_offset": 0, 00:19:43.198 "data_size": 65536 00:19:43.198 } 00:19:43.198 ] 00:19:43.198 }' 00:19:43.198 05:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.198 05:30:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.455 [2024-11-20 05:30:15.265892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.455 05:30:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.455 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.712 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.712 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.712 "name": "Existed_Raid", 00:19:43.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.712 "strip_size_kb": 0, 00:19:43.712 "state": "configuring", 00:19:43.712 "raid_level": "raid1", 00:19:43.712 "superblock": false, 00:19:43.712 "num_base_bdevs": 4, 00:19:43.712 "num_base_bdevs_discovered": 3, 00:19:43.712 "num_base_bdevs_operational": 4, 00:19:43.712 "base_bdevs_list": [ 00:19:43.712 { 00:19:43.712 "name": "BaseBdev1", 00:19:43.712 "uuid": "9ddb18f1-d69b-4823-85ac-4d011854678f", 00:19:43.712 "is_configured": true, 00:19:43.712 "data_offset": 0, 00:19:43.712 "data_size": 65536 00:19:43.712 }, 00:19:43.712 { 00:19:43.712 "name": null, 00:19:43.712 "uuid": "e24ad8a2-9b16-4a6f-ae2b-227c7f27632e", 00:19:43.712 "is_configured": false, 00:19:43.712 "data_offset": 
0, 00:19:43.712 "data_size": 65536 00:19:43.712 }, 00:19:43.712 { 00:19:43.712 "name": "BaseBdev3", 00:19:43.712 "uuid": "66667755-66a8-41dc-93c0-2a4fddad784b", 00:19:43.712 "is_configured": true, 00:19:43.712 "data_offset": 0, 00:19:43.712 "data_size": 65536 00:19:43.712 }, 00:19:43.712 { 00:19:43.712 "name": "BaseBdev4", 00:19:43.712 "uuid": "72ff51f3-ce4a-4588-91b0-097b609aad31", 00:19:43.712 "is_configured": true, 00:19:43.712 "data_offset": 0, 00:19:43.712 "data_size": 65536 00:19:43.712 } 00:19:43.712 ] 00:19:43.712 }' 00:19:43.712 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.712 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.970 [2024-11-20 05:30:15.613993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.970 05:30:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.970 "name": "Existed_Raid", 00:19:43.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.970 "strip_size_kb": 0, 00:19:43.970 "state": "configuring", 00:19:43.970 
"raid_level": "raid1", 00:19:43.970 "superblock": false, 00:19:43.970 "num_base_bdevs": 4, 00:19:43.970 "num_base_bdevs_discovered": 2, 00:19:43.970 "num_base_bdevs_operational": 4, 00:19:43.970 "base_bdevs_list": [ 00:19:43.970 { 00:19:43.970 "name": null, 00:19:43.970 "uuid": "9ddb18f1-d69b-4823-85ac-4d011854678f", 00:19:43.970 "is_configured": false, 00:19:43.970 "data_offset": 0, 00:19:43.970 "data_size": 65536 00:19:43.970 }, 00:19:43.970 { 00:19:43.970 "name": null, 00:19:43.970 "uuid": "e24ad8a2-9b16-4a6f-ae2b-227c7f27632e", 00:19:43.970 "is_configured": false, 00:19:43.970 "data_offset": 0, 00:19:43.970 "data_size": 65536 00:19:43.970 }, 00:19:43.970 { 00:19:43.970 "name": "BaseBdev3", 00:19:43.970 "uuid": "66667755-66a8-41dc-93c0-2a4fddad784b", 00:19:43.970 "is_configured": true, 00:19:43.970 "data_offset": 0, 00:19:43.970 "data_size": 65536 00:19:43.970 }, 00:19:43.970 { 00:19:43.970 "name": "BaseBdev4", 00:19:43.970 "uuid": "72ff51f3-ce4a-4588-91b0-097b609aad31", 00:19:43.970 "is_configured": true, 00:19:43.970 "data_offset": 0, 00:19:43.970 "data_size": 65536 00:19:43.970 } 00:19:43.970 ] 00:19:43.970 }' 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.970 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.227 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.227 05:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:44.227 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.227 05:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.227 [2024-11-20 05:30:16.030854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.227 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.483 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.483 "name": "Existed_Raid", 00:19:44.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.483 "strip_size_kb": 0, 00:19:44.483 "state": "configuring", 00:19:44.483 "raid_level": "raid1", 00:19:44.483 "superblock": false, 00:19:44.483 "num_base_bdevs": 4, 00:19:44.483 "num_base_bdevs_discovered": 3, 00:19:44.483 "num_base_bdevs_operational": 4, 00:19:44.483 "base_bdevs_list": [ 00:19:44.483 { 00:19:44.483 "name": null, 00:19:44.483 "uuid": "9ddb18f1-d69b-4823-85ac-4d011854678f", 00:19:44.483 "is_configured": false, 00:19:44.483 "data_offset": 0, 00:19:44.483 "data_size": 65536 00:19:44.483 }, 00:19:44.483 { 00:19:44.483 "name": "BaseBdev2", 00:19:44.483 "uuid": "e24ad8a2-9b16-4a6f-ae2b-227c7f27632e", 00:19:44.483 "is_configured": true, 00:19:44.483 "data_offset": 0, 00:19:44.483 "data_size": 65536 00:19:44.483 }, 00:19:44.483 { 00:19:44.483 "name": "BaseBdev3", 00:19:44.483 "uuid": "66667755-66a8-41dc-93c0-2a4fddad784b", 00:19:44.483 "is_configured": true, 00:19:44.483 "data_offset": 0, 00:19:44.483 "data_size": 65536 00:19:44.483 }, 00:19:44.483 { 00:19:44.483 "name": "BaseBdev4", 00:19:44.483 "uuid": "72ff51f3-ce4a-4588-91b0-097b609aad31", 00:19:44.483 "is_configured": true, 00:19:44.483 "data_offset": 0, 00:19:44.483 "data_size": 65536 00:19:44.483 } 00:19:44.483 ] 00:19:44.483 }' 00:19:44.483 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.483 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.740 05:30:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9ddb18f1-d69b-4823-85ac-4d011854678f 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.740 [2024-11-20 05:30:16.455215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:44.740 [2024-11-20 05:30:16.455256] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:44.740 [2024-11-20 05:30:16.455266] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:44.740 
[2024-11-20 05:30:16.455509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:44.740 [2024-11-20 05:30:16.455636] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:44.740 [2024-11-20 05:30:16.455644] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:44.740 [2024-11-20 05:30:16.455853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.740 NewBaseBdev 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:44.740 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.740 [ 00:19:44.740 { 00:19:44.740 "name": "NewBaseBdev", 00:19:44.740 "aliases": [ 00:19:44.740 "9ddb18f1-d69b-4823-85ac-4d011854678f" 00:19:44.740 ], 00:19:44.740 "product_name": "Malloc disk", 00:19:44.740 "block_size": 512, 00:19:44.740 "num_blocks": 65536, 00:19:44.740 "uuid": "9ddb18f1-d69b-4823-85ac-4d011854678f", 00:19:44.740 "assigned_rate_limits": { 00:19:44.740 "rw_ios_per_sec": 0, 00:19:44.740 "rw_mbytes_per_sec": 0, 00:19:44.740 "r_mbytes_per_sec": 0, 00:19:44.740 "w_mbytes_per_sec": 0 00:19:44.740 }, 00:19:44.740 "claimed": true, 00:19:44.740 "claim_type": "exclusive_write", 00:19:44.740 "zoned": false, 00:19:44.740 "supported_io_types": { 00:19:44.740 "read": true, 00:19:44.740 "write": true, 00:19:44.740 "unmap": true, 00:19:44.740 "flush": true, 00:19:44.740 "reset": true, 00:19:44.740 "nvme_admin": false, 00:19:44.740 "nvme_io": false, 00:19:44.740 "nvme_io_md": false, 00:19:44.741 "write_zeroes": true, 00:19:44.741 "zcopy": true, 00:19:44.741 "get_zone_info": false, 00:19:44.741 "zone_management": false, 00:19:44.741 "zone_append": false, 00:19:44.741 "compare": false, 00:19:44.741 "compare_and_write": false, 00:19:44.741 "abort": true, 00:19:44.741 "seek_hole": false, 00:19:44.741 "seek_data": false, 00:19:44.741 "copy": true, 00:19:44.741 "nvme_iov_md": false 00:19:44.741 }, 00:19:44.741 "memory_domains": [ 00:19:44.741 { 00:19:44.741 "dma_device_id": "system", 00:19:44.741 "dma_device_type": 1 00:19:44.741 }, 00:19:44.741 { 00:19:44.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.741 "dma_device_type": 2 00:19:44.741 } 00:19:44.741 ], 00:19:44.741 "driver_specific": {} 00:19:44.741 } 00:19:44.741 ] 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 
00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.741 "name": "Existed_Raid", 00:19:44.741 "uuid": "7a242bc7-d715-4360-96a1-9783f53246b1", 00:19:44.741 "strip_size_kb": 0, 00:19:44.741 "state": "online", 00:19:44.741 
"raid_level": "raid1", 00:19:44.741 "superblock": false, 00:19:44.741 "num_base_bdevs": 4, 00:19:44.741 "num_base_bdevs_discovered": 4, 00:19:44.741 "num_base_bdevs_operational": 4, 00:19:44.741 "base_bdevs_list": [ 00:19:44.741 { 00:19:44.741 "name": "NewBaseBdev", 00:19:44.741 "uuid": "9ddb18f1-d69b-4823-85ac-4d011854678f", 00:19:44.741 "is_configured": true, 00:19:44.741 "data_offset": 0, 00:19:44.741 "data_size": 65536 00:19:44.741 }, 00:19:44.741 { 00:19:44.741 "name": "BaseBdev2", 00:19:44.741 "uuid": "e24ad8a2-9b16-4a6f-ae2b-227c7f27632e", 00:19:44.741 "is_configured": true, 00:19:44.741 "data_offset": 0, 00:19:44.741 "data_size": 65536 00:19:44.741 }, 00:19:44.741 { 00:19:44.741 "name": "BaseBdev3", 00:19:44.741 "uuid": "66667755-66a8-41dc-93c0-2a4fddad784b", 00:19:44.741 "is_configured": true, 00:19:44.741 "data_offset": 0, 00:19:44.741 "data_size": 65536 00:19:44.741 }, 00:19:44.741 { 00:19:44.741 "name": "BaseBdev4", 00:19:44.741 "uuid": "72ff51f3-ce4a-4588-91b0-097b609aad31", 00:19:44.741 "is_configured": true, 00:19:44.741 "data_offset": 0, 00:19:44.741 "data_size": 65536 00:19:44.741 } 00:19:44.741 ] 00:19:44.741 }' 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.741 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.999 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:44.999 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:44.999 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:44.999 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:44.999 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:44.999 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:19:44.999 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:44.999 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:44.999 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.999 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.999 [2024-11-20 05:30:16.799645] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:44.999 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.999 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:44.999 "name": "Existed_Raid", 00:19:44.999 "aliases": [ 00:19:44.999 "7a242bc7-d715-4360-96a1-9783f53246b1" 00:19:44.999 ], 00:19:44.999 "product_name": "Raid Volume", 00:19:44.999 "block_size": 512, 00:19:44.999 "num_blocks": 65536, 00:19:44.999 "uuid": "7a242bc7-d715-4360-96a1-9783f53246b1", 00:19:44.999 "assigned_rate_limits": { 00:19:44.999 "rw_ios_per_sec": 0, 00:19:44.999 "rw_mbytes_per_sec": 0, 00:19:44.999 "r_mbytes_per_sec": 0, 00:19:44.999 "w_mbytes_per_sec": 0 00:19:44.999 }, 00:19:44.999 "claimed": false, 00:19:44.999 "zoned": false, 00:19:44.999 "supported_io_types": { 00:19:44.999 "read": true, 00:19:44.999 "write": true, 00:19:44.999 "unmap": false, 00:19:44.999 "flush": false, 00:19:44.999 "reset": true, 00:19:44.999 "nvme_admin": false, 00:19:44.999 "nvme_io": false, 00:19:44.999 "nvme_io_md": false, 00:19:44.999 "write_zeroes": true, 00:19:44.999 "zcopy": false, 00:19:44.999 "get_zone_info": false, 00:19:44.999 "zone_management": false, 00:19:44.999 "zone_append": false, 00:19:44.999 "compare": false, 00:19:44.999 "compare_and_write": false, 00:19:44.999 "abort": false, 00:19:44.999 "seek_hole": false, 00:19:44.999 "seek_data": false, 00:19:44.999 
"copy": false, 00:19:44.999 "nvme_iov_md": false 00:19:44.999 }, 00:19:44.999 "memory_domains": [ 00:19:44.999 { 00:19:44.999 "dma_device_id": "system", 00:19:44.999 "dma_device_type": 1 00:19:44.999 }, 00:19:44.999 { 00:19:44.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.999 "dma_device_type": 2 00:19:44.999 }, 00:19:44.999 { 00:19:44.999 "dma_device_id": "system", 00:19:44.999 "dma_device_type": 1 00:19:44.999 }, 00:19:44.999 { 00:19:44.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.999 "dma_device_type": 2 00:19:44.999 }, 00:19:44.999 { 00:19:44.999 "dma_device_id": "system", 00:19:44.999 "dma_device_type": 1 00:19:44.999 }, 00:19:44.999 { 00:19:44.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.999 "dma_device_type": 2 00:19:44.999 }, 00:19:44.999 { 00:19:44.999 "dma_device_id": "system", 00:19:44.999 "dma_device_type": 1 00:19:44.999 }, 00:19:44.999 { 00:19:44.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.999 "dma_device_type": 2 00:19:44.999 } 00:19:44.999 ], 00:19:44.999 "driver_specific": { 00:19:44.999 "raid": { 00:19:44.999 "uuid": "7a242bc7-d715-4360-96a1-9783f53246b1", 00:19:44.999 "strip_size_kb": 0, 00:19:44.999 "state": "online", 00:19:44.999 "raid_level": "raid1", 00:19:44.999 "superblock": false, 00:19:44.999 "num_base_bdevs": 4, 00:19:44.999 "num_base_bdevs_discovered": 4, 00:19:44.999 "num_base_bdevs_operational": 4, 00:19:44.999 "base_bdevs_list": [ 00:19:44.999 { 00:19:44.999 "name": "NewBaseBdev", 00:19:44.999 "uuid": "9ddb18f1-d69b-4823-85ac-4d011854678f", 00:19:44.999 "is_configured": true, 00:19:44.999 "data_offset": 0, 00:19:44.999 "data_size": 65536 00:19:44.999 }, 00:19:44.999 { 00:19:44.999 "name": "BaseBdev2", 00:19:44.999 "uuid": "e24ad8a2-9b16-4a6f-ae2b-227c7f27632e", 00:19:44.999 "is_configured": true, 00:19:44.999 "data_offset": 0, 00:19:44.999 "data_size": 65536 00:19:44.999 }, 00:19:44.999 { 00:19:44.999 "name": "BaseBdev3", 00:19:44.999 "uuid": "66667755-66a8-41dc-93c0-2a4fddad784b", 00:19:44.999 
"is_configured": true, 00:19:44.999 "data_offset": 0, 00:19:45.000 "data_size": 65536 00:19:45.000 }, 00:19:45.000 { 00:19:45.000 "name": "BaseBdev4", 00:19:45.000 "uuid": "72ff51f3-ce4a-4588-91b0-097b609aad31", 00:19:45.000 "is_configured": true, 00:19:45.000 "data_offset": 0, 00:19:45.000 "data_size": 65536 00:19:45.000 } 00:19:45.000 ] 00:19:45.000 } 00:19:45.000 } 00:19:45.000 }' 00:19:45.000 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:45.257 BaseBdev2 00:19:45.257 BaseBdev3 00:19:45.257 BaseBdev4' 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.257 05:30:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:45.257 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.258 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.258 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.258 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.258 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.258 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.258 05:30:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.258 05:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:45.258 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.258 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.258 05:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.258 [2024-11-20 05:30:17.023336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:45.258 [2024-11-20 05:30:17.023361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:45.258 [2024-11-20 05:30:17.023447] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:45.258 [2024-11-20 05:30:17.023699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:45.258 [2024-11-20 05:30:17.023709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71323 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 71323 ']' 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71323 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71323 00:19:45.258 killing process with pid 71323 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71323' 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71323 00:19:45.258 [2024-11-20 05:30:17.056803] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:45.258 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71323 00:19:45.564 [2024-11-20 05:30:17.257339] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:46.168 00:19:46.168 real 0m8.180s 00:19:46.168 user 0m13.008s 00:19:46.168 sys 0m1.440s 00:19:46.168 ************************************ 00:19:46.168 END TEST raid_state_function_test 00:19:46.168 ************************************ 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:19:46.168 05:30:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:19:46.168 05:30:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:46.168 05:30:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:46.168 05:30:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:46.168 ************************************ 00:19:46.168 START TEST raid_state_function_test_sb 00:19:46.168 ************************************ 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:46.168 
05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:46.168 Process raid pid: 71961 00:19:46.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71961 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71961' 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71961 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 71961 ']' 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.168 05:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:46.168 [2024-11-20 05:30:17.976967] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:19:46.168 [2024-11-20 05:30:17.977087] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.427 [2024-11-20 05:30:18.135848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.427 [2024-11-20 05:30:18.252400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.686 [2024-11-20 05:30:18.400087] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:46.686 [2024-11-20 05:30:18.400125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.252 [2024-11-20 05:30:18.862230] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:47.252 [2024-11-20 05:30:18.862290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:47.252 [2024-11-20 05:30:18.862300] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:47.252 [2024-11-20 05:30:18.862311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:47.252 [2024-11-20 05:30:18.862318] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:19:47.252 [2024-11-20 05:30:18.862327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:47.252 [2024-11-20 05:30:18.862338] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:47.252 [2024-11-20 05:30:18.862347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.252 05:30:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.252 "name": "Existed_Raid", 00:19:47.252 "uuid": "319de6e6-dd98-4420-819f-c5cab487024e", 00:19:47.252 "strip_size_kb": 0, 00:19:47.252 "state": "configuring", 00:19:47.252 "raid_level": "raid1", 00:19:47.252 "superblock": true, 00:19:47.252 "num_base_bdevs": 4, 00:19:47.252 "num_base_bdevs_discovered": 0, 00:19:47.252 "num_base_bdevs_operational": 4, 00:19:47.252 "base_bdevs_list": [ 00:19:47.252 { 00:19:47.252 "name": "BaseBdev1", 00:19:47.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.252 "is_configured": false, 00:19:47.252 "data_offset": 0, 00:19:47.252 "data_size": 0 00:19:47.252 }, 00:19:47.252 { 00:19:47.252 "name": "BaseBdev2", 00:19:47.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.252 "is_configured": false, 00:19:47.252 "data_offset": 0, 00:19:47.252 "data_size": 0 00:19:47.252 }, 00:19:47.252 { 00:19:47.252 "name": "BaseBdev3", 00:19:47.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.252 "is_configured": false, 00:19:47.252 "data_offset": 0, 00:19:47.252 "data_size": 0 00:19:47.252 }, 00:19:47.252 { 00:19:47.252 "name": "BaseBdev4", 00:19:47.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.252 "is_configured": false, 00:19:47.252 "data_offset": 0, 00:19:47.252 "data_size": 0 00:19:47.252 } 00:19:47.252 ] 00:19:47.252 }' 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.252 05:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.509 [2024-11-20 05:30:19.162242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:47.509 [2024-11-20 05:30:19.162285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.509 [2024-11-20 05:30:19.170236] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:47.509 [2024-11-20 05:30:19.170279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:47.509 [2024-11-20 05:30:19.170288] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:47.509 [2024-11-20 05:30:19.170297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:47.509 [2024-11-20 05:30:19.170303] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:47.509 [2024-11-20 05:30:19.170312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:47.509 [2024-11-20 05:30:19.170318] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:19:47.509 [2024-11-20 05:30:19.170327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.509 [2024-11-20 05:30:19.204821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:47.509 BaseBdev1 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:47.509 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.510 [ 00:19:47.510 { 00:19:47.510 "name": "BaseBdev1", 00:19:47.510 "aliases": [ 00:19:47.510 "90fcb346-99c6-427d-b568-206a7e53999a" 00:19:47.510 ], 00:19:47.510 "product_name": "Malloc disk", 00:19:47.510 "block_size": 512, 00:19:47.510 "num_blocks": 65536, 00:19:47.510 "uuid": "90fcb346-99c6-427d-b568-206a7e53999a", 00:19:47.510 "assigned_rate_limits": { 00:19:47.510 "rw_ios_per_sec": 0, 00:19:47.510 "rw_mbytes_per_sec": 0, 00:19:47.510 "r_mbytes_per_sec": 0, 00:19:47.510 "w_mbytes_per_sec": 0 00:19:47.510 }, 00:19:47.510 "claimed": true, 00:19:47.510 "claim_type": "exclusive_write", 00:19:47.510 "zoned": false, 00:19:47.510 "supported_io_types": { 00:19:47.510 "read": true, 00:19:47.510 "write": true, 00:19:47.510 "unmap": true, 00:19:47.510 "flush": true, 00:19:47.510 "reset": true, 00:19:47.510 "nvme_admin": false, 00:19:47.510 "nvme_io": false, 00:19:47.510 "nvme_io_md": false, 00:19:47.510 "write_zeroes": true, 00:19:47.510 "zcopy": true, 00:19:47.510 "get_zone_info": false, 00:19:47.510 "zone_management": false, 00:19:47.510 "zone_append": false, 00:19:47.510 "compare": false, 00:19:47.510 "compare_and_write": false, 00:19:47.510 "abort": true, 00:19:47.510 "seek_hole": false, 00:19:47.510 "seek_data": false, 00:19:47.510 "copy": true, 00:19:47.510 "nvme_iov_md": false 00:19:47.510 }, 00:19:47.510 "memory_domains": [ 00:19:47.510 { 00:19:47.510 "dma_device_id": "system", 00:19:47.510 "dma_device_type": 1 00:19:47.510 }, 00:19:47.510 { 00:19:47.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.510 "dma_device_type": 2 00:19:47.510 } 00:19:47.510 ], 00:19:47.510 "driver_specific": {} 
00:19:47.510 } 00:19:47.510 ] 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.510 "name": "Existed_Raid", 00:19:47.510 "uuid": "6333ad83-b610-4783-814d-40d3f3a57c6e", 00:19:47.510 "strip_size_kb": 0, 00:19:47.510 "state": "configuring", 00:19:47.510 "raid_level": "raid1", 00:19:47.510 "superblock": true, 00:19:47.510 "num_base_bdevs": 4, 00:19:47.510 "num_base_bdevs_discovered": 1, 00:19:47.510 "num_base_bdevs_operational": 4, 00:19:47.510 "base_bdevs_list": [ 00:19:47.510 { 00:19:47.510 "name": "BaseBdev1", 00:19:47.510 "uuid": "90fcb346-99c6-427d-b568-206a7e53999a", 00:19:47.510 "is_configured": true, 00:19:47.510 "data_offset": 2048, 00:19:47.510 "data_size": 63488 00:19:47.510 }, 00:19:47.510 { 00:19:47.510 "name": "BaseBdev2", 00:19:47.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.510 "is_configured": false, 00:19:47.510 "data_offset": 0, 00:19:47.510 "data_size": 0 00:19:47.510 }, 00:19:47.510 { 00:19:47.510 "name": "BaseBdev3", 00:19:47.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.510 "is_configured": false, 00:19:47.510 "data_offset": 0, 00:19:47.510 "data_size": 0 00:19:47.510 }, 00:19:47.510 { 00:19:47.510 "name": "BaseBdev4", 00:19:47.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.510 "is_configured": false, 00:19:47.510 "data_offset": 0, 00:19:47.510 "data_size": 0 00:19:47.510 } 00:19:47.510 ] 00:19:47.510 }' 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.510 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.767 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:47.767 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.767 05:30:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:47.767 [2024-11-20 05:30:19.524957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:47.767 [2024-11-20 05:30:19.525022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:47.767 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.767 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:47.767 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.767 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.767 [2024-11-20 05:30:19.533010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:47.767 [2024-11-20 05:30:19.534982] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:47.768 [2024-11-20 05:30:19.535027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:47.768 [2024-11-20 05:30:19.535037] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:47.768 [2024-11-20 05:30:19.535048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:47.768 [2024-11-20 05:30:19.535055] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:47.768 [2024-11-20 05:30:19.535064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:47.768 05:30:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.768 "name": 
"Existed_Raid", 00:19:47.768 "uuid": "4f54f63f-dd94-4d6e-af3f-75578171c2ca", 00:19:47.768 "strip_size_kb": 0, 00:19:47.768 "state": "configuring", 00:19:47.768 "raid_level": "raid1", 00:19:47.768 "superblock": true, 00:19:47.768 "num_base_bdevs": 4, 00:19:47.768 "num_base_bdevs_discovered": 1, 00:19:47.768 "num_base_bdevs_operational": 4, 00:19:47.768 "base_bdevs_list": [ 00:19:47.768 { 00:19:47.768 "name": "BaseBdev1", 00:19:47.768 "uuid": "90fcb346-99c6-427d-b568-206a7e53999a", 00:19:47.768 "is_configured": true, 00:19:47.768 "data_offset": 2048, 00:19:47.768 "data_size": 63488 00:19:47.768 }, 00:19:47.768 { 00:19:47.768 "name": "BaseBdev2", 00:19:47.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.768 "is_configured": false, 00:19:47.768 "data_offset": 0, 00:19:47.768 "data_size": 0 00:19:47.768 }, 00:19:47.768 { 00:19:47.768 "name": "BaseBdev3", 00:19:47.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.768 "is_configured": false, 00:19:47.768 "data_offset": 0, 00:19:47.768 "data_size": 0 00:19:47.768 }, 00:19:47.768 { 00:19:47.768 "name": "BaseBdev4", 00:19:47.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.768 "is_configured": false, 00:19:47.768 "data_offset": 0, 00:19:47.768 "data_size": 0 00:19:47.768 } 00:19:47.768 ] 00:19:47.768 }' 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.768 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.333 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:48.333 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.333 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.333 [2024-11-20 05:30:19.889509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:48.333 
BaseBdev2 00:19:48.333 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.334 [ 00:19:48.334 { 00:19:48.334 "name": "BaseBdev2", 00:19:48.334 "aliases": [ 00:19:48.334 "9b2133f2-ca57-4edb-8d45-ded67a43ad08" 00:19:48.334 ], 00:19:48.334 "product_name": "Malloc disk", 00:19:48.334 "block_size": 512, 00:19:48.334 "num_blocks": 65536, 00:19:48.334 "uuid": "9b2133f2-ca57-4edb-8d45-ded67a43ad08", 00:19:48.334 "assigned_rate_limits": { 
00:19:48.334 "rw_ios_per_sec": 0, 00:19:48.334 "rw_mbytes_per_sec": 0, 00:19:48.334 "r_mbytes_per_sec": 0, 00:19:48.334 "w_mbytes_per_sec": 0 00:19:48.334 }, 00:19:48.334 "claimed": true, 00:19:48.334 "claim_type": "exclusive_write", 00:19:48.334 "zoned": false, 00:19:48.334 "supported_io_types": { 00:19:48.334 "read": true, 00:19:48.334 "write": true, 00:19:48.334 "unmap": true, 00:19:48.334 "flush": true, 00:19:48.334 "reset": true, 00:19:48.334 "nvme_admin": false, 00:19:48.334 "nvme_io": false, 00:19:48.334 "nvme_io_md": false, 00:19:48.334 "write_zeroes": true, 00:19:48.334 "zcopy": true, 00:19:48.334 "get_zone_info": false, 00:19:48.334 "zone_management": false, 00:19:48.334 "zone_append": false, 00:19:48.334 "compare": false, 00:19:48.334 "compare_and_write": false, 00:19:48.334 "abort": true, 00:19:48.334 "seek_hole": false, 00:19:48.334 "seek_data": false, 00:19:48.334 "copy": true, 00:19:48.334 "nvme_iov_md": false 00:19:48.334 }, 00:19:48.334 "memory_domains": [ 00:19:48.334 { 00:19:48.334 "dma_device_id": "system", 00:19:48.334 "dma_device_type": 1 00:19:48.334 }, 00:19:48.334 { 00:19:48.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.334 "dma_device_type": 2 00:19:48.334 } 00:19:48.334 ], 00:19:48.334 "driver_specific": {} 00:19:48.334 } 00:19:48.334 ] 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
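The `waitforbdev BaseBdev2` call above polls `rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000` until the freshly created malloc bdev is visible. As a rough sketch of that poll-until-present pattern (not the actual autotest helper — `get_bdevs` below is a hypothetical stand-in for the `rpc.py bdev_get_bdevs` call, and the real script passes its timeout in milliseconds via `-t`):

```python
import time

def wait_for_bdev(get_bdevs, bdev_name, timeout_s=2.0, poll_s=0.1):
    """Poll get_bdevs() until a bdev with the given name appears.

    Returns the bdev's info dict once found, or None if the deadline
    passes first -- mirroring how waitforbdev succeeds or times out.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        for bdev in get_bdevs():
            if bdev.get("name") == bdev_name:
                return bdev
        time.sleep(poll_s)
    return None

# Example with a stubbed RPC returning the BaseBdev2 record from the log:
bdevs = [{"name": "BaseBdev2", "block_size": 512, "num_blocks": 65536}]
found = wait_for_bdev(lambda: bdevs, "BaseBdev2", timeout_s=0.5)
```

In the transcript the same check is spelled as a retry loop around `rpc_cmd bdev_get_bdevs`, preceded by `bdev_wait_for_examine` so claims settle before the lookup.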
00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.334 "name": "Existed_Raid", 00:19:48.334 "uuid": "4f54f63f-dd94-4d6e-af3f-75578171c2ca", 00:19:48.334 "strip_size_kb": 0, 00:19:48.334 "state": "configuring", 00:19:48.334 "raid_level": "raid1", 00:19:48.334 "superblock": true, 00:19:48.334 "num_base_bdevs": 4, 00:19:48.334 "num_base_bdevs_discovered": 2, 00:19:48.334 "num_base_bdevs_operational": 4, 00:19:48.334 
"base_bdevs_list": [ 00:19:48.334 { 00:19:48.334 "name": "BaseBdev1", 00:19:48.334 "uuid": "90fcb346-99c6-427d-b568-206a7e53999a", 00:19:48.334 "is_configured": true, 00:19:48.334 "data_offset": 2048, 00:19:48.334 "data_size": 63488 00:19:48.334 }, 00:19:48.334 { 00:19:48.334 "name": "BaseBdev2", 00:19:48.334 "uuid": "9b2133f2-ca57-4edb-8d45-ded67a43ad08", 00:19:48.334 "is_configured": true, 00:19:48.334 "data_offset": 2048, 00:19:48.334 "data_size": 63488 00:19:48.334 }, 00:19:48.334 { 00:19:48.334 "name": "BaseBdev3", 00:19:48.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.334 "is_configured": false, 00:19:48.334 "data_offset": 0, 00:19:48.334 "data_size": 0 00:19:48.334 }, 00:19:48.334 { 00:19:48.334 "name": "BaseBdev4", 00:19:48.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.334 "is_configured": false, 00:19:48.334 "data_offset": 0, 00:19:48.334 "data_size": 0 00:19:48.334 } 00:19:48.334 ] 00:19:48.334 }' 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.334 05:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.591 [2024-11-20 05:30:20.302392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:48.591 BaseBdev3 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.591 [ 00:19:48.591 { 00:19:48.591 "name": "BaseBdev3", 00:19:48.591 "aliases": [ 00:19:48.591 "a876ff34-1a6e-4cb8-8521-f82064804759" 00:19:48.591 ], 00:19:48.591 "product_name": "Malloc disk", 00:19:48.591 "block_size": 512, 00:19:48.591 "num_blocks": 65536, 00:19:48.591 "uuid": "a876ff34-1a6e-4cb8-8521-f82064804759", 00:19:48.591 "assigned_rate_limits": { 00:19:48.591 "rw_ios_per_sec": 0, 00:19:48.591 "rw_mbytes_per_sec": 0, 00:19:48.591 "r_mbytes_per_sec": 0, 00:19:48.591 "w_mbytes_per_sec": 0 00:19:48.591 }, 00:19:48.591 "claimed": true, 00:19:48.591 "claim_type": "exclusive_write", 00:19:48.591 "zoned": false, 00:19:48.591 "supported_io_types": { 00:19:48.591 "read": true, 00:19:48.591 
"write": true, 00:19:48.591 "unmap": true, 00:19:48.591 "flush": true, 00:19:48.591 "reset": true, 00:19:48.591 "nvme_admin": false, 00:19:48.591 "nvme_io": false, 00:19:48.591 "nvme_io_md": false, 00:19:48.591 "write_zeroes": true, 00:19:48.591 "zcopy": true, 00:19:48.591 "get_zone_info": false, 00:19:48.591 "zone_management": false, 00:19:48.591 "zone_append": false, 00:19:48.591 "compare": false, 00:19:48.591 "compare_and_write": false, 00:19:48.591 "abort": true, 00:19:48.591 "seek_hole": false, 00:19:48.591 "seek_data": false, 00:19:48.591 "copy": true, 00:19:48.591 "nvme_iov_md": false 00:19:48.591 }, 00:19:48.591 "memory_domains": [ 00:19:48.591 { 00:19:48.591 "dma_device_id": "system", 00:19:48.591 "dma_device_type": 1 00:19:48.591 }, 00:19:48.591 { 00:19:48.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.591 "dma_device_type": 2 00:19:48.591 } 00:19:48.591 ], 00:19:48.591 "driver_specific": {} 00:19:48.591 } 00:19:48.591 ] 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.591 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.592 "name": "Existed_Raid", 00:19:48.592 "uuid": "4f54f63f-dd94-4d6e-af3f-75578171c2ca", 00:19:48.592 "strip_size_kb": 0, 00:19:48.592 "state": "configuring", 00:19:48.592 "raid_level": "raid1", 00:19:48.592 "superblock": true, 00:19:48.592 "num_base_bdevs": 4, 00:19:48.592 "num_base_bdevs_discovered": 3, 00:19:48.592 "num_base_bdevs_operational": 4, 00:19:48.592 "base_bdevs_list": [ 00:19:48.592 { 00:19:48.592 "name": "BaseBdev1", 00:19:48.592 "uuid": "90fcb346-99c6-427d-b568-206a7e53999a", 00:19:48.592 "is_configured": true, 00:19:48.592 "data_offset": 2048, 00:19:48.592 "data_size": 63488 00:19:48.592 }, 00:19:48.592 { 00:19:48.592 "name": "BaseBdev2", 00:19:48.592 "uuid": 
"9b2133f2-ca57-4edb-8d45-ded67a43ad08", 00:19:48.592 "is_configured": true, 00:19:48.592 "data_offset": 2048, 00:19:48.592 "data_size": 63488 00:19:48.592 }, 00:19:48.592 { 00:19:48.592 "name": "BaseBdev3", 00:19:48.592 "uuid": "a876ff34-1a6e-4cb8-8521-f82064804759", 00:19:48.592 "is_configured": true, 00:19:48.592 "data_offset": 2048, 00:19:48.592 "data_size": 63488 00:19:48.592 }, 00:19:48.592 { 00:19:48.592 "name": "BaseBdev4", 00:19:48.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.592 "is_configured": false, 00:19:48.592 "data_offset": 0, 00:19:48.592 "data_size": 0 00:19:48.592 } 00:19:48.592 ] 00:19:48.592 }' 00:19:48.592 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.592 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.912 [2024-11-20 05:30:20.683065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:48.912 [2024-11-20 05:30:20.683536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:48.912 [2024-11-20 05:30:20.683631] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:48.912 BaseBdev4 00:19:48.912 [2024-11-20 05:30:20.683954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:48.912 [2024-11-20 05:30:20.684115] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:48.912 [2024-11-20 05:30:20.684132] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:19:48.912 [2024-11-20 05:30:20.684267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.912 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.188 [ 00:19:49.188 { 00:19:49.188 "name": "BaseBdev4", 00:19:49.188 "aliases": [ 00:19:49.189 "aa21a54e-729e-4138-b953-859803b53cb3" 00:19:49.189 ], 00:19:49.189 "product_name": "Malloc disk", 00:19:49.189 "block_size": 512, 00:19:49.189 
"num_blocks": 65536, 00:19:49.189 "uuid": "aa21a54e-729e-4138-b953-859803b53cb3", 00:19:49.189 "assigned_rate_limits": { 00:19:49.189 "rw_ios_per_sec": 0, 00:19:49.189 "rw_mbytes_per_sec": 0, 00:19:49.189 "r_mbytes_per_sec": 0, 00:19:49.189 "w_mbytes_per_sec": 0 00:19:49.189 }, 00:19:49.189 "claimed": true, 00:19:49.189 "claim_type": "exclusive_write", 00:19:49.189 "zoned": false, 00:19:49.189 "supported_io_types": { 00:19:49.189 "read": true, 00:19:49.189 "write": true, 00:19:49.189 "unmap": true, 00:19:49.189 "flush": true, 00:19:49.189 "reset": true, 00:19:49.189 "nvme_admin": false, 00:19:49.189 "nvme_io": false, 00:19:49.189 "nvme_io_md": false, 00:19:49.189 "write_zeroes": true, 00:19:49.189 "zcopy": true, 00:19:49.189 "get_zone_info": false, 00:19:49.189 "zone_management": false, 00:19:49.189 "zone_append": false, 00:19:49.189 "compare": false, 00:19:49.189 "compare_and_write": false, 00:19:49.189 "abort": true, 00:19:49.189 "seek_hole": false, 00:19:49.189 "seek_data": false, 00:19:49.189 "copy": true, 00:19:49.189 "nvme_iov_md": false 00:19:49.189 }, 00:19:49.189 "memory_domains": [ 00:19:49.189 { 00:19:49.189 "dma_device_id": "system", 00:19:49.189 "dma_device_type": 1 00:19:49.189 }, 00:19:49.189 { 00:19:49.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.189 "dma_device_type": 2 00:19:49.189 } 00:19:49.189 ], 00:19:49.189 "driver_specific": {} 00:19:49.189 } 00:19:49.189 ] 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
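At this point all four base bdevs are claimed and `verify_raid_bdev_state Existed_Raid online raid1 0 4` runs: the jq filter `'.[] | select(.name == "Existed_Raid")'` extracts one object from `bdev_raid_get_bdevs all`, and the helper compares its fields against the expected values. A minimal Python equivalent of that field-by-field check, using values copied from the JSON dumps in this log (the function name mirrors the shell helper but this is an illustrative sketch, not the autotest code):

```python
import json

# Subset of the Existed_Raid record as reported by bdev_raid_get_bdevs
# once BaseBdev1..BaseBdev4 are all configured.
raid_bdev_info = json.loads("""{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4
}""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, operational):
    # Compare each reported field against the expected value, as the
    # shell helper does with its local expected_state/raid_level/etc.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 4)
```

Earlier in the log the same helper is invoked with `configuring` while `num_base_bdevs_discovered` climbs from 1 to 3; the transition to `online` only happens after the fourth base bdev is claimed.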
00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.189 "name": "Existed_Raid", 00:19:49.189 "uuid": "4f54f63f-dd94-4d6e-af3f-75578171c2ca", 00:19:49.189 "strip_size_kb": 0, 00:19:49.189 "state": "online", 00:19:49.189 "raid_level": "raid1", 00:19:49.189 "superblock": true, 00:19:49.189 "num_base_bdevs": 4, 
00:19:49.189 "num_base_bdevs_discovered": 4, 00:19:49.189 "num_base_bdevs_operational": 4, 00:19:49.189 "base_bdevs_list": [ 00:19:49.189 { 00:19:49.189 "name": "BaseBdev1", 00:19:49.189 "uuid": "90fcb346-99c6-427d-b568-206a7e53999a", 00:19:49.189 "is_configured": true, 00:19:49.189 "data_offset": 2048, 00:19:49.189 "data_size": 63488 00:19:49.189 }, 00:19:49.189 { 00:19:49.189 "name": "BaseBdev2", 00:19:49.189 "uuid": "9b2133f2-ca57-4edb-8d45-ded67a43ad08", 00:19:49.189 "is_configured": true, 00:19:49.189 "data_offset": 2048, 00:19:49.189 "data_size": 63488 00:19:49.189 }, 00:19:49.189 { 00:19:49.189 "name": "BaseBdev3", 00:19:49.189 "uuid": "a876ff34-1a6e-4cb8-8521-f82064804759", 00:19:49.189 "is_configured": true, 00:19:49.189 "data_offset": 2048, 00:19:49.189 "data_size": 63488 00:19:49.189 }, 00:19:49.189 { 00:19:49.189 "name": "BaseBdev4", 00:19:49.189 "uuid": "aa21a54e-729e-4138-b953-859803b53cb3", 00:19:49.189 "is_configured": true, 00:19:49.189 "data_offset": 2048, 00:19:49.189 "data_size": 63488 00:19:49.189 } 00:19:49.189 ] 00:19:49.189 }' 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.189 05:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.446 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:49.447 
05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.447 [2024-11-20 05:30:21.067612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:49.447 "name": "Existed_Raid", 00:19:49.447 "aliases": [ 00:19:49.447 "4f54f63f-dd94-4d6e-af3f-75578171c2ca" 00:19:49.447 ], 00:19:49.447 "product_name": "Raid Volume", 00:19:49.447 "block_size": 512, 00:19:49.447 "num_blocks": 63488, 00:19:49.447 "uuid": "4f54f63f-dd94-4d6e-af3f-75578171c2ca", 00:19:49.447 "assigned_rate_limits": { 00:19:49.447 "rw_ios_per_sec": 0, 00:19:49.447 "rw_mbytes_per_sec": 0, 00:19:49.447 "r_mbytes_per_sec": 0, 00:19:49.447 "w_mbytes_per_sec": 0 00:19:49.447 }, 00:19:49.447 "claimed": false, 00:19:49.447 "zoned": false, 00:19:49.447 "supported_io_types": { 00:19:49.447 "read": true, 00:19:49.447 "write": true, 00:19:49.447 "unmap": false, 00:19:49.447 "flush": false, 00:19:49.447 "reset": true, 00:19:49.447 "nvme_admin": false, 00:19:49.447 "nvme_io": false, 00:19:49.447 "nvme_io_md": false, 00:19:49.447 "write_zeroes": true, 00:19:49.447 "zcopy": false, 00:19:49.447 "get_zone_info": false, 00:19:49.447 "zone_management": false, 00:19:49.447 "zone_append": false, 00:19:49.447 "compare": false, 00:19:49.447 "compare_and_write": false, 00:19:49.447 "abort": false, 00:19:49.447 "seek_hole": false, 00:19:49.447 "seek_data": false, 00:19:49.447 "copy": false, 00:19:49.447 
"nvme_iov_md": false 00:19:49.447 }, 00:19:49.447 "memory_domains": [ 00:19:49.447 { 00:19:49.447 "dma_device_id": "system", 00:19:49.447 "dma_device_type": 1 00:19:49.447 }, 00:19:49.447 { 00:19:49.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.447 "dma_device_type": 2 00:19:49.447 }, 00:19:49.447 { 00:19:49.447 "dma_device_id": "system", 00:19:49.447 "dma_device_type": 1 00:19:49.447 }, 00:19:49.447 { 00:19:49.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.447 "dma_device_type": 2 00:19:49.447 }, 00:19:49.447 { 00:19:49.447 "dma_device_id": "system", 00:19:49.447 "dma_device_type": 1 00:19:49.447 }, 00:19:49.447 { 00:19:49.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.447 "dma_device_type": 2 00:19:49.447 }, 00:19:49.447 { 00:19:49.447 "dma_device_id": "system", 00:19:49.447 "dma_device_type": 1 00:19:49.447 }, 00:19:49.447 { 00:19:49.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.447 "dma_device_type": 2 00:19:49.447 } 00:19:49.447 ], 00:19:49.447 "driver_specific": { 00:19:49.447 "raid": { 00:19:49.447 "uuid": "4f54f63f-dd94-4d6e-af3f-75578171c2ca", 00:19:49.447 "strip_size_kb": 0, 00:19:49.447 "state": "online", 00:19:49.447 "raid_level": "raid1", 00:19:49.447 "superblock": true, 00:19:49.447 "num_base_bdevs": 4, 00:19:49.447 "num_base_bdevs_discovered": 4, 00:19:49.447 "num_base_bdevs_operational": 4, 00:19:49.447 "base_bdevs_list": [ 00:19:49.447 { 00:19:49.447 "name": "BaseBdev1", 00:19:49.447 "uuid": "90fcb346-99c6-427d-b568-206a7e53999a", 00:19:49.447 "is_configured": true, 00:19:49.447 "data_offset": 2048, 00:19:49.447 "data_size": 63488 00:19:49.447 }, 00:19:49.447 { 00:19:49.447 "name": "BaseBdev2", 00:19:49.447 "uuid": "9b2133f2-ca57-4edb-8d45-ded67a43ad08", 00:19:49.447 "is_configured": true, 00:19:49.447 "data_offset": 2048, 00:19:49.447 "data_size": 63488 00:19:49.447 }, 00:19:49.447 { 00:19:49.447 "name": "BaseBdev3", 00:19:49.447 "uuid": "a876ff34-1a6e-4cb8-8521-f82064804759", 00:19:49.447 "is_configured": true, 
00:19:49.447 "data_offset": 2048, 00:19:49.447 "data_size": 63488 00:19:49.447 }, 00:19:49.447 { 00:19:49.447 "name": "BaseBdev4", 00:19:49.447 "uuid": "aa21a54e-729e-4138-b953-859803b53cb3", 00:19:49.447 "is_configured": true, 00:19:49.447 "data_offset": 2048, 00:19:49.447 "data_size": 63488 00:19:49.447 } 00:19:49.447 ] 00:19:49.447 } 00:19:49.447 } 00:19:49.447 }' 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:49.447 BaseBdev2 00:19:49.447 BaseBdev3 00:19:49.447 BaseBdev4' 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:49.447 05:30:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.447 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.706 [2024-11-20 05:30:21.335324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:49.706 05:30:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.706 "name": "Existed_Raid", 00:19:49.706 "uuid": "4f54f63f-dd94-4d6e-af3f-75578171c2ca", 00:19:49.706 "strip_size_kb": 0, 00:19:49.706 
"state": "online", 00:19:49.706 "raid_level": "raid1", 00:19:49.706 "superblock": true, 00:19:49.706 "num_base_bdevs": 4, 00:19:49.706 "num_base_bdevs_discovered": 3, 00:19:49.706 "num_base_bdevs_operational": 3, 00:19:49.706 "base_bdevs_list": [ 00:19:49.706 { 00:19:49.706 "name": null, 00:19:49.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.706 "is_configured": false, 00:19:49.706 "data_offset": 0, 00:19:49.706 "data_size": 63488 00:19:49.706 }, 00:19:49.706 { 00:19:49.706 "name": "BaseBdev2", 00:19:49.706 "uuid": "9b2133f2-ca57-4edb-8d45-ded67a43ad08", 00:19:49.706 "is_configured": true, 00:19:49.706 "data_offset": 2048, 00:19:49.706 "data_size": 63488 00:19:49.706 }, 00:19:49.706 { 00:19:49.706 "name": "BaseBdev3", 00:19:49.706 "uuid": "a876ff34-1a6e-4cb8-8521-f82064804759", 00:19:49.706 "is_configured": true, 00:19:49.706 "data_offset": 2048, 00:19:49.706 "data_size": 63488 00:19:49.706 }, 00:19:49.706 { 00:19:49.706 "name": "BaseBdev4", 00:19:49.706 "uuid": "aa21a54e-729e-4138-b953-859803b53cb3", 00:19:49.706 "is_configured": true, 00:19:49.706 "data_offset": 2048, 00:19:49.706 "data_size": 63488 00:19:49.706 } 00:19:49.706 ] 00:19:49.706 }' 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.706 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.964 05:30:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.964 [2024-11-20 05:30:21.741184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:49.964 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.222 [2024-11-20 05:30:21.833954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.222 [2024-11-20 05:30:21.919406] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:50.222 [2024-11-20 05:30:21.919505] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:50.222 [2024-11-20 05:30:21.967996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:50.222 [2024-11-20 05:30:21.968047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:50.222 [2024-11-20 05:30:21.968058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:50.222 05:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.222 BaseBdev2 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.222 05:30:22 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:19:50.222 [ 00:19:50.222 { 00:19:50.222 "name": "BaseBdev2", 00:19:50.222 "aliases": [ 00:19:50.222 "14536d59-0a4d-49c7-bf91-49c9d993ed5a" 00:19:50.222 ], 00:19:50.222 "product_name": "Malloc disk", 00:19:50.222 "block_size": 512, 00:19:50.222 "num_blocks": 65536, 00:19:50.222 "uuid": "14536d59-0a4d-49c7-bf91-49c9d993ed5a", 00:19:50.222 "assigned_rate_limits": { 00:19:50.222 "rw_ios_per_sec": 0, 00:19:50.222 "rw_mbytes_per_sec": 0, 00:19:50.222 "r_mbytes_per_sec": 0, 00:19:50.222 "w_mbytes_per_sec": 0 00:19:50.222 }, 00:19:50.222 "claimed": false, 00:19:50.222 "zoned": false, 00:19:50.222 "supported_io_types": { 00:19:50.222 "read": true, 00:19:50.222 "write": true, 00:19:50.222 "unmap": true, 00:19:50.222 "flush": true, 00:19:50.222 "reset": true, 00:19:50.222 "nvme_admin": false, 00:19:50.222 "nvme_io": false, 00:19:50.222 "nvme_io_md": false, 00:19:50.222 "write_zeroes": true, 00:19:50.222 "zcopy": true, 00:19:50.222 "get_zone_info": false, 00:19:50.222 "zone_management": false, 00:19:50.222 "zone_append": false, 00:19:50.222 "compare": false, 00:19:50.222 "compare_and_write": false, 00:19:50.222 "abort": true, 00:19:50.222 "seek_hole": false, 00:19:50.222 "seek_data": false, 00:19:50.222 "copy": true, 00:19:50.222 "nvme_iov_md": false 00:19:50.481 }, 00:19:50.481 "memory_domains": [ 00:19:50.481 { 00:19:50.481 "dma_device_id": "system", 00:19:50.481 "dma_device_type": 1 00:19:50.481 }, 00:19:50.481 { 00:19:50.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.481 "dma_device_type": 2 00:19:50.481 } 00:19:50.481 ], 00:19:50.481 "driver_specific": {} 00:19:50.481 } 00:19:50.481 ] 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:50.481 05:30:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.481 BaseBdev3 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:50.481 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.481 05:30:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.481 [ 00:19:50.481 { 00:19:50.481 "name": "BaseBdev3", 00:19:50.481 "aliases": [ 00:19:50.481 "bca4462e-d3ba-4df7-8b52-f75e7a2d03de" 00:19:50.481 ], 00:19:50.481 "product_name": "Malloc disk", 00:19:50.481 "block_size": 512, 00:19:50.481 "num_blocks": 65536, 00:19:50.481 "uuid": "bca4462e-d3ba-4df7-8b52-f75e7a2d03de", 00:19:50.481 "assigned_rate_limits": { 00:19:50.481 "rw_ios_per_sec": 0, 00:19:50.481 "rw_mbytes_per_sec": 0, 00:19:50.481 "r_mbytes_per_sec": 0, 00:19:50.481 "w_mbytes_per_sec": 0 00:19:50.481 }, 00:19:50.481 "claimed": false, 00:19:50.481 "zoned": false, 00:19:50.481 "supported_io_types": { 00:19:50.481 "read": true, 00:19:50.481 "write": true, 00:19:50.481 "unmap": true, 00:19:50.481 "flush": true, 00:19:50.481 "reset": true, 00:19:50.481 "nvme_admin": false, 00:19:50.482 "nvme_io": false, 00:19:50.482 "nvme_io_md": false, 00:19:50.482 "write_zeroes": true, 00:19:50.482 "zcopy": true, 00:19:50.482 "get_zone_info": false, 00:19:50.482 "zone_management": false, 00:19:50.482 "zone_append": false, 00:19:50.482 "compare": false, 00:19:50.482 "compare_and_write": false, 00:19:50.482 "abort": true, 00:19:50.482 "seek_hole": false, 00:19:50.482 "seek_data": false, 00:19:50.482 "copy": true, 00:19:50.482 "nvme_iov_md": false 00:19:50.482 }, 00:19:50.482 "memory_domains": [ 00:19:50.482 { 00:19:50.482 "dma_device_id": "system", 00:19:50.482 "dma_device_type": 1 00:19:50.482 }, 00:19:50.482 { 00:19:50.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.482 "dma_device_type": 2 00:19:50.482 } 00:19:50.482 ], 00:19:50.482 "driver_specific": {} 00:19:50.482 } 00:19:50.482 ] 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.482 BaseBdev4 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.482 [ 00:19:50.482 { 00:19:50.482 "name": "BaseBdev4", 00:19:50.482 "aliases": [ 00:19:50.482 "13868343-e47f-459a-9b45-7bf76b4ae0d6" 00:19:50.482 ], 00:19:50.482 "product_name": "Malloc disk", 00:19:50.482 "block_size": 512, 00:19:50.482 "num_blocks": 65536, 00:19:50.482 "uuid": "13868343-e47f-459a-9b45-7bf76b4ae0d6", 00:19:50.482 "assigned_rate_limits": { 00:19:50.482 "rw_ios_per_sec": 0, 00:19:50.482 "rw_mbytes_per_sec": 0, 00:19:50.482 "r_mbytes_per_sec": 0, 00:19:50.482 "w_mbytes_per_sec": 0 00:19:50.482 }, 00:19:50.482 "claimed": false, 00:19:50.482 "zoned": false, 00:19:50.482 "supported_io_types": { 00:19:50.482 "read": true, 00:19:50.482 "write": true, 00:19:50.482 "unmap": true, 00:19:50.482 "flush": true, 00:19:50.482 "reset": true, 00:19:50.482 "nvme_admin": false, 00:19:50.482 "nvme_io": false, 00:19:50.482 "nvme_io_md": false, 00:19:50.482 "write_zeroes": true, 00:19:50.482 "zcopy": true, 00:19:50.482 "get_zone_info": false, 00:19:50.482 "zone_management": false, 00:19:50.482 "zone_append": false, 00:19:50.482 "compare": false, 00:19:50.482 "compare_and_write": false, 00:19:50.482 "abort": true, 00:19:50.482 "seek_hole": false, 00:19:50.482 "seek_data": false, 00:19:50.482 "copy": true, 00:19:50.482 "nvme_iov_md": false 00:19:50.482 }, 00:19:50.482 "memory_domains": [ 00:19:50.482 { 00:19:50.482 "dma_device_id": "system", 00:19:50.482 "dma_device_type": 1 00:19:50.482 }, 00:19:50.482 { 00:19:50.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.482 "dma_device_type": 2 00:19:50.482 } 00:19:50.482 ], 00:19:50.482 "driver_specific": {} 00:19:50.482 } 00:19:50.482 ] 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.482 [2024-11-20 05:30:22.164349] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:50.482 [2024-11-20 05:30:22.164527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:50.482 [2024-11-20 05:30:22.164588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:50.482 [2024-11-20 05:30:22.166248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:50.482 [2024-11-20 05:30:22.166371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.482 "name": "Existed_Raid", 00:19:50.482 "uuid": "79632d6e-0839-4e43-9933-0b0bdd1f56d6", 00:19:50.482 "strip_size_kb": 0, 00:19:50.482 "state": "configuring", 00:19:50.482 "raid_level": "raid1", 00:19:50.482 "superblock": true, 00:19:50.482 "num_base_bdevs": 4, 00:19:50.482 "num_base_bdevs_discovered": 3, 00:19:50.482 "num_base_bdevs_operational": 4, 00:19:50.482 "base_bdevs_list": [ 00:19:50.482 { 00:19:50.482 "name": "BaseBdev1", 00:19:50.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.482 "is_configured": false, 00:19:50.482 "data_offset": 0, 00:19:50.482 "data_size": 0 00:19:50.482 }, 00:19:50.482 { 00:19:50.482 "name": "BaseBdev2", 00:19:50.482 "uuid": "14536d59-0a4d-49c7-bf91-49c9d993ed5a", 
00:19:50.482 "is_configured": true, 00:19:50.482 "data_offset": 2048, 00:19:50.482 "data_size": 63488 00:19:50.482 }, 00:19:50.482 { 00:19:50.482 "name": "BaseBdev3", 00:19:50.482 "uuid": "bca4462e-d3ba-4df7-8b52-f75e7a2d03de", 00:19:50.482 "is_configured": true, 00:19:50.482 "data_offset": 2048, 00:19:50.482 "data_size": 63488 00:19:50.482 }, 00:19:50.482 { 00:19:50.482 "name": "BaseBdev4", 00:19:50.482 "uuid": "13868343-e47f-459a-9b45-7bf76b4ae0d6", 00:19:50.482 "is_configured": true, 00:19:50.482 "data_offset": 2048, 00:19:50.482 "data_size": 63488 00:19:50.482 } 00:19:50.482 ] 00:19:50.482 }' 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.482 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.742 [2024-11-20 05:30:22.484428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.742 "name": "Existed_Raid", 00:19:50.742 "uuid": "79632d6e-0839-4e43-9933-0b0bdd1f56d6", 00:19:50.742 "strip_size_kb": 0, 00:19:50.742 "state": "configuring", 00:19:50.742 "raid_level": "raid1", 00:19:50.742 "superblock": true, 00:19:50.742 "num_base_bdevs": 4, 00:19:50.742 "num_base_bdevs_discovered": 2, 00:19:50.742 "num_base_bdevs_operational": 4, 00:19:50.742 "base_bdevs_list": [ 00:19:50.742 { 00:19:50.742 "name": "BaseBdev1", 00:19:50.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.742 "is_configured": false, 00:19:50.742 "data_offset": 0, 00:19:50.742 "data_size": 0 00:19:50.742 }, 00:19:50.742 { 00:19:50.742 "name": null, 00:19:50.742 "uuid": "14536d59-0a4d-49c7-bf91-49c9d993ed5a", 00:19:50.742 
"is_configured": false, 00:19:50.742 "data_offset": 0, 00:19:50.742 "data_size": 63488 00:19:50.742 }, 00:19:50.742 { 00:19:50.742 "name": "BaseBdev3", 00:19:50.742 "uuid": "bca4462e-d3ba-4df7-8b52-f75e7a2d03de", 00:19:50.742 "is_configured": true, 00:19:50.742 "data_offset": 2048, 00:19:50.742 "data_size": 63488 00:19:50.742 }, 00:19:50.742 { 00:19:50.742 "name": "BaseBdev4", 00:19:50.742 "uuid": "13868343-e47f-459a-9b45-7bf76b4ae0d6", 00:19:50.742 "is_configured": true, 00:19:50.742 "data_offset": 2048, 00:19:50.742 "data_size": 63488 00:19:50.742 } 00:19:50.742 ] 00:19:50.742 }' 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.742 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.001 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.001 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.001 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.001 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:51.001 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.259 [2024-11-20 05:30:22.868691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:51.259 BaseBdev1 
00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.259 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.259 [ 00:19:51.259 { 00:19:51.259 "name": "BaseBdev1", 00:19:51.259 "aliases": [ 00:19:51.259 "1bac2ea2-4ae1-4676-aa8b-dba790cb1725" 00:19:51.259 ], 00:19:51.259 "product_name": "Malloc disk", 00:19:51.259 "block_size": 512, 00:19:51.259 "num_blocks": 65536, 00:19:51.259 "uuid": "1bac2ea2-4ae1-4676-aa8b-dba790cb1725", 00:19:51.259 "assigned_rate_limits": { 00:19:51.259 
"rw_ios_per_sec": 0, 00:19:51.259 "rw_mbytes_per_sec": 0, 00:19:51.259 "r_mbytes_per_sec": 0, 00:19:51.260 "w_mbytes_per_sec": 0 00:19:51.260 }, 00:19:51.260 "claimed": true, 00:19:51.260 "claim_type": "exclusive_write", 00:19:51.260 "zoned": false, 00:19:51.260 "supported_io_types": { 00:19:51.260 "read": true, 00:19:51.260 "write": true, 00:19:51.260 "unmap": true, 00:19:51.260 "flush": true, 00:19:51.260 "reset": true, 00:19:51.260 "nvme_admin": false, 00:19:51.260 "nvme_io": false, 00:19:51.260 "nvme_io_md": false, 00:19:51.260 "write_zeroes": true, 00:19:51.260 "zcopy": true, 00:19:51.260 "get_zone_info": false, 00:19:51.260 "zone_management": false, 00:19:51.260 "zone_append": false, 00:19:51.260 "compare": false, 00:19:51.260 "compare_and_write": false, 00:19:51.260 "abort": true, 00:19:51.260 "seek_hole": false, 00:19:51.260 "seek_data": false, 00:19:51.260 "copy": true, 00:19:51.260 "nvme_iov_md": false 00:19:51.260 }, 00:19:51.260 "memory_domains": [ 00:19:51.260 { 00:19:51.260 "dma_device_id": "system", 00:19:51.260 "dma_device_type": 1 00:19:51.260 }, 00:19:51.260 { 00:19:51.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.260 "dma_device_type": 2 00:19:51.260 } 00:19:51.260 ], 00:19:51.260 "driver_specific": {} 00:19:51.260 } 00:19:51.260 ] 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.260 "name": "Existed_Raid", 00:19:51.260 "uuid": "79632d6e-0839-4e43-9933-0b0bdd1f56d6", 00:19:51.260 "strip_size_kb": 0, 00:19:51.260 "state": "configuring", 00:19:51.260 "raid_level": "raid1", 00:19:51.260 "superblock": true, 00:19:51.260 "num_base_bdevs": 4, 00:19:51.260 "num_base_bdevs_discovered": 3, 00:19:51.260 "num_base_bdevs_operational": 4, 00:19:51.260 "base_bdevs_list": [ 00:19:51.260 { 00:19:51.260 "name": "BaseBdev1", 00:19:51.260 "uuid": "1bac2ea2-4ae1-4676-aa8b-dba790cb1725", 00:19:51.260 "is_configured": true, 00:19:51.260 "data_offset": 2048, 00:19:51.260 "data_size": 63488 
00:19:51.260 }, 00:19:51.260 { 00:19:51.260 "name": null, 00:19:51.260 "uuid": "14536d59-0a4d-49c7-bf91-49c9d993ed5a", 00:19:51.260 "is_configured": false, 00:19:51.260 "data_offset": 0, 00:19:51.260 "data_size": 63488 00:19:51.260 }, 00:19:51.260 { 00:19:51.260 "name": "BaseBdev3", 00:19:51.260 "uuid": "bca4462e-d3ba-4df7-8b52-f75e7a2d03de", 00:19:51.260 "is_configured": true, 00:19:51.260 "data_offset": 2048, 00:19:51.260 "data_size": 63488 00:19:51.260 }, 00:19:51.260 { 00:19:51.260 "name": "BaseBdev4", 00:19:51.260 "uuid": "13868343-e47f-459a-9b45-7bf76b4ae0d6", 00:19:51.260 "is_configured": true, 00:19:51.260 "data_offset": 2048, 00:19:51.260 "data_size": 63488 00:19:51.260 } 00:19:51.260 ] 00:19:51.260 }' 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.260 05:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.519 
[2024-11-20 05:30:23.236845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.519 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.519 05:30:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.519 "name": "Existed_Raid", 00:19:51.519 "uuid": "79632d6e-0839-4e43-9933-0b0bdd1f56d6", 00:19:51.519 "strip_size_kb": 0, 00:19:51.519 "state": "configuring", 00:19:51.519 "raid_level": "raid1", 00:19:51.519 "superblock": true, 00:19:51.519 "num_base_bdevs": 4, 00:19:51.519 "num_base_bdevs_discovered": 2, 00:19:51.520 "num_base_bdevs_operational": 4, 00:19:51.520 "base_bdevs_list": [ 00:19:51.520 { 00:19:51.520 "name": "BaseBdev1", 00:19:51.520 "uuid": "1bac2ea2-4ae1-4676-aa8b-dba790cb1725", 00:19:51.520 "is_configured": true, 00:19:51.520 "data_offset": 2048, 00:19:51.520 "data_size": 63488 00:19:51.520 }, 00:19:51.520 { 00:19:51.520 "name": null, 00:19:51.520 "uuid": "14536d59-0a4d-49c7-bf91-49c9d993ed5a", 00:19:51.520 "is_configured": false, 00:19:51.520 "data_offset": 0, 00:19:51.520 "data_size": 63488 00:19:51.520 }, 00:19:51.520 { 00:19:51.520 "name": null, 00:19:51.520 "uuid": "bca4462e-d3ba-4df7-8b52-f75e7a2d03de", 00:19:51.520 "is_configured": false, 00:19:51.520 "data_offset": 0, 00:19:51.520 "data_size": 63488 00:19:51.520 }, 00:19:51.520 { 00:19:51.520 "name": "BaseBdev4", 00:19:51.520 "uuid": "13868343-e47f-459a-9b45-7bf76b4ae0d6", 00:19:51.520 "is_configured": true, 00:19:51.520 "data_offset": 2048, 00:19:51.520 "data_size": 63488 00:19:51.520 } 00:19:51.520 ] 00:19:51.520 }' 00:19:51.520 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.520 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.779 
05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.779 [2024-11-20 05:30:23.592888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.779 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.038 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.038 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.038 "name": "Existed_Raid", 00:19:52.038 "uuid": "79632d6e-0839-4e43-9933-0b0bdd1f56d6", 00:19:52.038 "strip_size_kb": 0, 00:19:52.038 "state": "configuring", 00:19:52.038 "raid_level": "raid1", 00:19:52.038 "superblock": true, 00:19:52.038 "num_base_bdevs": 4, 00:19:52.038 "num_base_bdevs_discovered": 3, 00:19:52.038 "num_base_bdevs_operational": 4, 00:19:52.038 "base_bdevs_list": [ 00:19:52.038 { 00:19:52.038 "name": "BaseBdev1", 00:19:52.038 "uuid": "1bac2ea2-4ae1-4676-aa8b-dba790cb1725", 00:19:52.038 "is_configured": true, 00:19:52.038 "data_offset": 2048, 00:19:52.038 "data_size": 63488 00:19:52.038 }, 00:19:52.038 { 00:19:52.038 "name": null, 00:19:52.038 "uuid": "14536d59-0a4d-49c7-bf91-49c9d993ed5a", 00:19:52.038 "is_configured": false, 00:19:52.038 "data_offset": 0, 00:19:52.038 "data_size": 63488 00:19:52.038 }, 00:19:52.038 { 00:19:52.038 "name": "BaseBdev3", 00:19:52.038 "uuid": "bca4462e-d3ba-4df7-8b52-f75e7a2d03de", 00:19:52.038 "is_configured": true, 00:19:52.038 "data_offset": 2048, 00:19:52.038 "data_size": 63488 00:19:52.038 }, 00:19:52.038 { 00:19:52.038 "name": "BaseBdev4", 00:19:52.038 "uuid": 
"13868343-e47f-459a-9b45-7bf76b4ae0d6", 00:19:52.038 "is_configured": true, 00:19:52.038 "data_offset": 2048, 00:19:52.038 "data_size": 63488 00:19:52.038 } 00:19:52.038 ] 00:19:52.038 }' 00:19:52.038 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.038 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.297 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:52.297 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.297 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.297 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.297 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.297 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:52.297 05:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:52.297 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.297 05:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.297 [2024-11-20 05:30:23.952998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.297 "name": "Existed_Raid", 00:19:52.297 "uuid": "79632d6e-0839-4e43-9933-0b0bdd1f56d6", 00:19:52.297 "strip_size_kb": 0, 00:19:52.297 "state": "configuring", 00:19:52.297 "raid_level": "raid1", 00:19:52.297 "superblock": true, 00:19:52.297 "num_base_bdevs": 4, 00:19:52.297 "num_base_bdevs_discovered": 2, 00:19:52.297 "num_base_bdevs_operational": 4, 00:19:52.297 "base_bdevs_list": [ 00:19:52.297 { 00:19:52.297 "name": null, 00:19:52.297 
"uuid": "1bac2ea2-4ae1-4676-aa8b-dba790cb1725", 00:19:52.297 "is_configured": false, 00:19:52.297 "data_offset": 0, 00:19:52.297 "data_size": 63488 00:19:52.297 }, 00:19:52.297 { 00:19:52.297 "name": null, 00:19:52.297 "uuid": "14536d59-0a4d-49c7-bf91-49c9d993ed5a", 00:19:52.297 "is_configured": false, 00:19:52.297 "data_offset": 0, 00:19:52.297 "data_size": 63488 00:19:52.297 }, 00:19:52.297 { 00:19:52.297 "name": "BaseBdev3", 00:19:52.297 "uuid": "bca4462e-d3ba-4df7-8b52-f75e7a2d03de", 00:19:52.297 "is_configured": true, 00:19:52.297 "data_offset": 2048, 00:19:52.297 "data_size": 63488 00:19:52.297 }, 00:19:52.297 { 00:19:52.297 "name": "BaseBdev4", 00:19:52.297 "uuid": "13868343-e47f-459a-9b45-7bf76b4ae0d6", 00:19:52.297 "is_configured": true, 00:19:52.297 "data_offset": 2048, 00:19:52.297 "data_size": 63488 00:19:52.297 } 00:19:52.297 ] 00:19:52.297 }' 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.297 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.555 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.555 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:52.555 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.555 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.555 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.555 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:52.555 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:52.555 05:30:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.555 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.555 [2024-11-20 05:30:24.341126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:52.555 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.556 05:30:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.556 "name": "Existed_Raid", 00:19:52.556 "uuid": "79632d6e-0839-4e43-9933-0b0bdd1f56d6", 00:19:52.556 "strip_size_kb": 0, 00:19:52.556 "state": "configuring", 00:19:52.556 "raid_level": "raid1", 00:19:52.556 "superblock": true, 00:19:52.556 "num_base_bdevs": 4, 00:19:52.556 "num_base_bdevs_discovered": 3, 00:19:52.556 "num_base_bdevs_operational": 4, 00:19:52.556 "base_bdevs_list": [ 00:19:52.556 { 00:19:52.556 "name": null, 00:19:52.556 "uuid": "1bac2ea2-4ae1-4676-aa8b-dba790cb1725", 00:19:52.556 "is_configured": false, 00:19:52.556 "data_offset": 0, 00:19:52.556 "data_size": 63488 00:19:52.556 }, 00:19:52.556 { 00:19:52.556 "name": "BaseBdev2", 00:19:52.556 "uuid": "14536d59-0a4d-49c7-bf91-49c9d993ed5a", 00:19:52.556 "is_configured": true, 00:19:52.556 "data_offset": 2048, 00:19:52.556 "data_size": 63488 00:19:52.556 }, 00:19:52.556 { 00:19:52.556 "name": "BaseBdev3", 00:19:52.556 "uuid": "bca4462e-d3ba-4df7-8b52-f75e7a2d03de", 00:19:52.556 "is_configured": true, 00:19:52.556 "data_offset": 2048, 00:19:52.556 "data_size": 63488 00:19:52.556 }, 00:19:52.556 { 00:19:52.556 "name": "BaseBdev4", 00:19:52.556 "uuid": "13868343-e47f-459a-9b45-7bf76b4ae0d6", 00:19:52.556 "is_configured": true, 00:19:52.556 "data_offset": 2048, 00:19:52.556 "data_size": 63488 00:19:52.556 } 00:19:52.556 ] 00:19:52.556 }' 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.556 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.814 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.814 05:30:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.814 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.814 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:52.814 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1bac2ea2-4ae1-4676-aa8b-dba790cb1725 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.087 [2024-11-20 05:30:24.728924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:53.087 [2024-11-20 05:30:24.729103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:53.087 [2024-11-20 05:30:24.729117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:53.087 NewBaseBdev 00:19:53.087 [2024-11-20 05:30:24.729327] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:53.087 [2024-11-20 05:30:24.729471] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:53.087 [2024-11-20 05:30:24.729479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:53.087 [2024-11-20 05:30:24.729582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.087 
05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.087 [ 00:19:53.087 { 00:19:53.087 "name": "NewBaseBdev", 00:19:53.087 "aliases": [ 00:19:53.087 "1bac2ea2-4ae1-4676-aa8b-dba790cb1725" 00:19:53.087 ], 00:19:53.087 "product_name": "Malloc disk", 00:19:53.087 "block_size": 512, 00:19:53.087 "num_blocks": 65536, 00:19:53.087 "uuid": "1bac2ea2-4ae1-4676-aa8b-dba790cb1725", 00:19:53.087 "assigned_rate_limits": { 00:19:53.087 "rw_ios_per_sec": 0, 00:19:53.087 "rw_mbytes_per_sec": 0, 00:19:53.087 "r_mbytes_per_sec": 0, 00:19:53.087 "w_mbytes_per_sec": 0 00:19:53.087 }, 00:19:53.087 "claimed": true, 00:19:53.087 "claim_type": "exclusive_write", 00:19:53.087 "zoned": false, 00:19:53.087 "supported_io_types": { 00:19:53.087 "read": true, 00:19:53.087 "write": true, 00:19:53.087 "unmap": true, 00:19:53.087 "flush": true, 00:19:53.087 "reset": true, 00:19:53.087 "nvme_admin": false, 00:19:53.087 "nvme_io": false, 00:19:53.087 "nvme_io_md": false, 00:19:53.087 "write_zeroes": true, 00:19:53.087 "zcopy": true, 00:19:53.087 "get_zone_info": false, 00:19:53.087 "zone_management": false, 00:19:53.087 "zone_append": false, 00:19:53.087 "compare": false, 00:19:53.087 "compare_and_write": false, 00:19:53.087 "abort": true, 00:19:53.087 "seek_hole": false, 00:19:53.087 "seek_data": false, 00:19:53.087 "copy": true, 00:19:53.087 "nvme_iov_md": false 00:19:53.087 }, 00:19:53.087 "memory_domains": [ 00:19:53.087 { 00:19:53.087 "dma_device_id": "system", 00:19:53.087 "dma_device_type": 1 00:19:53.087 }, 00:19:53.087 { 00:19:53.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.087 "dma_device_type": 2 00:19:53.087 } 00:19:53.087 ], 00:19:53.087 "driver_specific": {} 00:19:53.087 } 00:19:53.087 ] 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:53.087 05:30:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.087 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.087 "name": "Existed_Raid", 00:19:53.087 "uuid": "79632d6e-0839-4e43-9933-0b0bdd1f56d6", 00:19:53.087 "strip_size_kb": 0, 00:19:53.087 
"state": "online", 00:19:53.087 "raid_level": "raid1", 00:19:53.087 "superblock": true, 00:19:53.087 "num_base_bdevs": 4, 00:19:53.087 "num_base_bdevs_discovered": 4, 00:19:53.087 "num_base_bdevs_operational": 4, 00:19:53.087 "base_bdevs_list": [ 00:19:53.087 { 00:19:53.087 "name": "NewBaseBdev", 00:19:53.087 "uuid": "1bac2ea2-4ae1-4676-aa8b-dba790cb1725", 00:19:53.087 "is_configured": true, 00:19:53.087 "data_offset": 2048, 00:19:53.087 "data_size": 63488 00:19:53.087 }, 00:19:53.087 { 00:19:53.087 "name": "BaseBdev2", 00:19:53.087 "uuid": "14536d59-0a4d-49c7-bf91-49c9d993ed5a", 00:19:53.087 "is_configured": true, 00:19:53.087 "data_offset": 2048, 00:19:53.087 "data_size": 63488 00:19:53.087 }, 00:19:53.087 { 00:19:53.087 "name": "BaseBdev3", 00:19:53.087 "uuid": "bca4462e-d3ba-4df7-8b52-f75e7a2d03de", 00:19:53.087 "is_configured": true, 00:19:53.087 "data_offset": 2048, 00:19:53.087 "data_size": 63488 00:19:53.087 }, 00:19:53.087 { 00:19:53.087 "name": "BaseBdev4", 00:19:53.088 "uuid": "13868343-e47f-459a-9b45-7bf76b4ae0d6", 00:19:53.088 "is_configured": true, 00:19:53.088 "data_offset": 2048, 00:19:53.088 "data_size": 63488 00:19:53.088 } 00:19:53.088 ] 00:19:53.088 }' 00:19:53.088 05:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.088 05:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.346 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:53.346 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:53.346 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:53.346 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:53.346 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:53.346 
05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:53.346 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:53.346 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.346 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.346 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:53.346 [2024-11-20 05:30:25.089381] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:53.346 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.346 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:53.346 "name": "Existed_Raid", 00:19:53.346 "aliases": [ 00:19:53.346 "79632d6e-0839-4e43-9933-0b0bdd1f56d6" 00:19:53.346 ], 00:19:53.346 "product_name": "Raid Volume", 00:19:53.346 "block_size": 512, 00:19:53.346 "num_blocks": 63488, 00:19:53.346 "uuid": "79632d6e-0839-4e43-9933-0b0bdd1f56d6", 00:19:53.346 "assigned_rate_limits": { 00:19:53.346 "rw_ios_per_sec": 0, 00:19:53.346 "rw_mbytes_per_sec": 0, 00:19:53.346 "r_mbytes_per_sec": 0, 00:19:53.346 "w_mbytes_per_sec": 0 00:19:53.346 }, 00:19:53.346 "claimed": false, 00:19:53.346 "zoned": false, 00:19:53.346 "supported_io_types": { 00:19:53.346 "read": true, 00:19:53.346 "write": true, 00:19:53.346 "unmap": false, 00:19:53.346 "flush": false, 00:19:53.346 "reset": true, 00:19:53.346 "nvme_admin": false, 00:19:53.346 "nvme_io": false, 00:19:53.346 "nvme_io_md": false, 00:19:53.346 "write_zeroes": true, 00:19:53.346 "zcopy": false, 00:19:53.346 "get_zone_info": false, 00:19:53.346 "zone_management": false, 00:19:53.346 "zone_append": false, 00:19:53.346 "compare": false, 00:19:53.346 "compare_and_write": false, 00:19:53.346 
"abort": false, 00:19:53.346 "seek_hole": false, 00:19:53.346 "seek_data": false, 00:19:53.346 "copy": false, 00:19:53.346 "nvme_iov_md": false 00:19:53.346 }, 00:19:53.346 "memory_domains": [ 00:19:53.346 { 00:19:53.346 "dma_device_id": "system", 00:19:53.346 "dma_device_type": 1 00:19:53.346 }, 00:19:53.346 { 00:19:53.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.346 "dma_device_type": 2 00:19:53.346 }, 00:19:53.346 { 00:19:53.346 "dma_device_id": "system", 00:19:53.346 "dma_device_type": 1 00:19:53.346 }, 00:19:53.346 { 00:19:53.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.346 "dma_device_type": 2 00:19:53.346 }, 00:19:53.346 { 00:19:53.346 "dma_device_id": "system", 00:19:53.346 "dma_device_type": 1 00:19:53.346 }, 00:19:53.346 { 00:19:53.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.346 "dma_device_type": 2 00:19:53.346 }, 00:19:53.346 { 00:19:53.346 "dma_device_id": "system", 00:19:53.346 "dma_device_type": 1 00:19:53.346 }, 00:19:53.346 { 00:19:53.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.346 "dma_device_type": 2 00:19:53.346 } 00:19:53.346 ], 00:19:53.346 "driver_specific": { 00:19:53.346 "raid": { 00:19:53.346 "uuid": "79632d6e-0839-4e43-9933-0b0bdd1f56d6", 00:19:53.346 "strip_size_kb": 0, 00:19:53.346 "state": "online", 00:19:53.346 "raid_level": "raid1", 00:19:53.346 "superblock": true, 00:19:53.346 "num_base_bdevs": 4, 00:19:53.346 "num_base_bdevs_discovered": 4, 00:19:53.346 "num_base_bdevs_operational": 4, 00:19:53.346 "base_bdevs_list": [ 00:19:53.346 { 00:19:53.346 "name": "NewBaseBdev", 00:19:53.346 "uuid": "1bac2ea2-4ae1-4676-aa8b-dba790cb1725", 00:19:53.346 "is_configured": true, 00:19:53.346 "data_offset": 2048, 00:19:53.346 "data_size": 63488 00:19:53.346 }, 00:19:53.346 { 00:19:53.346 "name": "BaseBdev2", 00:19:53.346 "uuid": "14536d59-0a4d-49c7-bf91-49c9d993ed5a", 00:19:53.346 "is_configured": true, 00:19:53.346 "data_offset": 2048, 00:19:53.346 "data_size": 63488 00:19:53.346 }, 00:19:53.346 { 
00:19:53.346 "name": "BaseBdev3", 00:19:53.346 "uuid": "bca4462e-d3ba-4df7-8b52-f75e7a2d03de", 00:19:53.346 "is_configured": true, 00:19:53.346 "data_offset": 2048, 00:19:53.346 "data_size": 63488 00:19:53.346 }, 00:19:53.346 { 00:19:53.346 "name": "BaseBdev4", 00:19:53.346 "uuid": "13868343-e47f-459a-9b45-7bf76b4ae0d6", 00:19:53.346 "is_configured": true, 00:19:53.346 "data_offset": 2048, 00:19:53.346 "data_size": 63488 00:19:53.346 } 00:19:53.346 ] 00:19:53.346 } 00:19:53.346 } 00:19:53.346 }' 00:19:53.346 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:53.346 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:53.346 BaseBdev2 00:19:53.346 BaseBdev3 00:19:53.346 BaseBdev4' 00:19:53.346 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.346 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:53.346 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.605 [2024-11-20 05:30:25.321054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:53.605 [2024-11-20 05:30:25.321080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:53.605 [2024-11-20 05:30:25.321145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:53.605 [2024-11-20 05:30:25.321423] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:53.605 [2024-11-20 05:30:25.321435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71961 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 71961 ']' 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 71961 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71961 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:53.605 killing process with pid 71961 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71961' 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 71961 00:19:53.605 [2024-11-20 05:30:25.349197] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:53.605 05:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 71961 00:19:53.864 [2024-11-20 05:30:25.547466] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:54.430 ************************************ 00:19:54.430 END TEST raid_state_function_test_sb 00:19:54.430 ************************************ 00:19:54.430 05:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:54.430 00:19:54.430 real 0m8.233s 
00:19:54.430 user 0m13.226s 00:19:54.430 sys 0m1.398s 00:19:54.430 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:54.430 05:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.430 05:30:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:19:54.430 05:30:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:54.430 05:30:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:54.430 05:30:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:54.430 ************************************ 00:19:54.430 START TEST raid_superblock_test 00:19:54.430 ************************************ 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:54.430 05:30:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72598 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72598 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72598 ']' 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:54.430 05:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.688 [2024-11-20 05:30:26.268458] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:19:54.688 [2024-11-20 05:30:26.268839] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72598 ] 00:19:54.688 [2024-11-20 05:30:26.437985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.945 [2024-11-20 05:30:26.552733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.945 [2024-11-20 05:30:26.699091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.945 [2024-11-20 05:30:26.699132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:19:55.512 
05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.512 malloc1 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.512 [2024-11-20 05:30:27.152055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:55.512 [2024-11-20 05:30:27.152241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.512 [2024-11-20 05:30:27.152270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:55.512 [2024-11-20 05:30:27.152281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.512 [2024-11-20 05:30:27.154557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.512 [2024-11-20 05:30:27.154589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:55.512 pt1 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.512 malloc2 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.512 [2024-11-20 05:30:27.189936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:55.512 [2024-11-20 05:30:27.189987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.512 [2024-11-20 05:30:27.190009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:55.512 [2024-11-20 05:30:27.190018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.512 [2024-11-20 05:30:27.192230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.512 [2024-11-20 05:30:27.192263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:55.512 
pt2 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.512 malloc3 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.512 [2024-11-20 05:30:27.251828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:55.512 [2024-11-20 05:30:27.251885] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.512 [2024-11-20 05:30:27.251907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:55.512 [2024-11-20 05:30:27.251917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.512 [2024-11-20 05:30:27.254172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.512 [2024-11-20 05:30:27.254208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:55.512 pt3 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.512 malloc4 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.512 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.512 [2024-11-20 05:30:27.293833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:55.512 [2024-11-20 05:30:27.293881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.513 [2024-11-20 05:30:27.293898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:55.513 [2024-11-20 05:30:27.293907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.513 [2024-11-20 05:30:27.296119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.513 [2024-11-20 05:30:27.296152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:55.513 pt4 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.513 [2024-11-20 05:30:27.301864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:55.513 [2024-11-20 05:30:27.303823] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:55.513 [2024-11-20 05:30:27.303889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:55.513 [2024-11-20 05:30:27.303936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:55.513 [2024-11-20 05:30:27.304124] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:55.513 [2024-11-20 05:30:27.304142] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:55.513 [2024-11-20 05:30:27.304428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:55.513 [2024-11-20 05:30:27.304581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:55.513 [2024-11-20 05:30:27.304634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:55.513 [2024-11-20 05:30:27.304777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.513 
05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.513 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.771 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.771 "name": "raid_bdev1", 00:19:55.771 "uuid": "66c413e5-348e-41ff-af04-405ced3472cd", 00:19:55.771 "strip_size_kb": 0, 00:19:55.771 "state": "online", 00:19:55.771 "raid_level": "raid1", 00:19:55.771 "superblock": true, 00:19:55.771 "num_base_bdevs": 4, 00:19:55.771 "num_base_bdevs_discovered": 4, 00:19:55.771 "num_base_bdevs_operational": 4, 00:19:55.771 "base_bdevs_list": [ 00:19:55.771 { 00:19:55.771 "name": "pt1", 00:19:55.771 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:55.771 "is_configured": true, 00:19:55.771 "data_offset": 2048, 00:19:55.771 "data_size": 63488 00:19:55.771 }, 00:19:55.771 { 00:19:55.771 "name": "pt2", 00:19:55.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:55.771 "is_configured": true, 00:19:55.771 "data_offset": 2048, 00:19:55.771 "data_size": 63488 00:19:55.771 }, 00:19:55.771 { 00:19:55.771 "name": "pt3", 00:19:55.771 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:55.771 "is_configured": true, 00:19:55.771 "data_offset": 2048, 00:19:55.771 "data_size": 63488 
00:19:55.771 }, 00:19:55.771 { 00:19:55.771 "name": "pt4", 00:19:55.771 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:55.771 "is_configured": true, 00:19:55.771 "data_offset": 2048, 00:19:55.771 "data_size": 63488 00:19:55.771 } 00:19:55.771 ] 00:19:55.771 }' 00:19:55.771 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.771 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.029 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:56.029 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:56.029 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:56.029 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:56.029 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:56.029 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:56.029 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:56.029 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:56.029 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.029 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.029 [2024-11-20 05:30:27.642328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:56.029 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.029 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:56.029 "name": "raid_bdev1", 00:19:56.029 "aliases": [ 00:19:56.029 "66c413e5-348e-41ff-af04-405ced3472cd" 00:19:56.029 ], 
00:19:56.029 "product_name": "Raid Volume", 00:19:56.029 "block_size": 512, 00:19:56.029 "num_blocks": 63488, 00:19:56.029 "uuid": "66c413e5-348e-41ff-af04-405ced3472cd", 00:19:56.029 "assigned_rate_limits": { 00:19:56.029 "rw_ios_per_sec": 0, 00:19:56.029 "rw_mbytes_per_sec": 0, 00:19:56.029 "r_mbytes_per_sec": 0, 00:19:56.029 "w_mbytes_per_sec": 0 00:19:56.029 }, 00:19:56.029 "claimed": false, 00:19:56.029 "zoned": false, 00:19:56.029 "supported_io_types": { 00:19:56.029 "read": true, 00:19:56.029 "write": true, 00:19:56.029 "unmap": false, 00:19:56.029 "flush": false, 00:19:56.029 "reset": true, 00:19:56.029 "nvme_admin": false, 00:19:56.029 "nvme_io": false, 00:19:56.029 "nvme_io_md": false, 00:19:56.029 "write_zeroes": true, 00:19:56.029 "zcopy": false, 00:19:56.029 "get_zone_info": false, 00:19:56.029 "zone_management": false, 00:19:56.029 "zone_append": false, 00:19:56.029 "compare": false, 00:19:56.029 "compare_and_write": false, 00:19:56.029 "abort": false, 00:19:56.029 "seek_hole": false, 00:19:56.029 "seek_data": false, 00:19:56.029 "copy": false, 00:19:56.029 "nvme_iov_md": false 00:19:56.029 }, 00:19:56.029 "memory_domains": [ 00:19:56.029 { 00:19:56.029 "dma_device_id": "system", 00:19:56.029 "dma_device_type": 1 00:19:56.029 }, 00:19:56.029 { 00:19:56.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.029 "dma_device_type": 2 00:19:56.029 }, 00:19:56.029 { 00:19:56.029 "dma_device_id": "system", 00:19:56.029 "dma_device_type": 1 00:19:56.029 }, 00:19:56.029 { 00:19:56.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.029 "dma_device_type": 2 00:19:56.029 }, 00:19:56.029 { 00:19:56.029 "dma_device_id": "system", 00:19:56.029 "dma_device_type": 1 00:19:56.029 }, 00:19:56.029 { 00:19:56.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.029 "dma_device_type": 2 00:19:56.029 }, 00:19:56.029 { 00:19:56.029 "dma_device_id": "system", 00:19:56.029 "dma_device_type": 1 00:19:56.029 }, 00:19:56.029 { 00:19:56.029 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:56.029 "dma_device_type": 2 00:19:56.029 } 00:19:56.029 ], 00:19:56.030 "driver_specific": { 00:19:56.030 "raid": { 00:19:56.030 "uuid": "66c413e5-348e-41ff-af04-405ced3472cd", 00:19:56.030 "strip_size_kb": 0, 00:19:56.030 "state": "online", 00:19:56.030 "raid_level": "raid1", 00:19:56.030 "superblock": true, 00:19:56.030 "num_base_bdevs": 4, 00:19:56.030 "num_base_bdevs_discovered": 4, 00:19:56.030 "num_base_bdevs_operational": 4, 00:19:56.030 "base_bdevs_list": [ 00:19:56.030 { 00:19:56.030 "name": "pt1", 00:19:56.030 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:56.030 "is_configured": true, 00:19:56.030 "data_offset": 2048, 00:19:56.030 "data_size": 63488 00:19:56.030 }, 00:19:56.030 { 00:19:56.030 "name": "pt2", 00:19:56.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:56.030 "is_configured": true, 00:19:56.030 "data_offset": 2048, 00:19:56.030 "data_size": 63488 00:19:56.030 }, 00:19:56.030 { 00:19:56.030 "name": "pt3", 00:19:56.030 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:56.030 "is_configured": true, 00:19:56.030 "data_offset": 2048, 00:19:56.030 "data_size": 63488 00:19:56.030 }, 00:19:56.030 { 00:19:56.030 "name": "pt4", 00:19:56.030 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:56.030 "is_configured": true, 00:19:56.030 "data_offset": 2048, 00:19:56.030 "data_size": 63488 00:19:56.030 } 00:19:56.030 ] 00:19:56.030 } 00:19:56.030 } 00:19:56.030 }' 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:56.030 pt2 00:19:56.030 pt3 00:19:56.030 pt4' 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:56.030 05:30:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.030 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:56.288 [2024-11-20 05:30:27.886341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=66c413e5-348e-41ff-af04-405ced3472cd 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 66c413e5-348e-41ff-af04-405ced3472cd ']' 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.288 [2024-11-20 05:30:27.917997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:56.288 [2024-11-20 05:30:27.918020] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:56.288 [2024-11-20 05:30:27.918097] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:56.288 [2024-11-20 05:30:27.918191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:56.288 [2024-11-20 05:30:27.918206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.288 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.289 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:56.289 05:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:56.289 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.289 05:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.289 [2024-11-20 05:30:28.034070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:56.289 [2024-11-20 05:30:28.036218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:56.289 [2024-11-20 05:30:28.036351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:56.289 [2024-11-20 05:30:28.036468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:56.289 [2024-11-20 05:30:28.036546] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:56.289 [2024-11-20 05:30:28.036760] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:56.289 [2024-11-20 05:30:28.037216] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:56.289 [2024-11-20 05:30:28.037361] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:56.289 [2024-11-20 05:30:28.037445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:56.289 [2024-11-20 05:30:28.037492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:19:56.289 request: 00:19:56.289 { 00:19:56.289 "name": "raid_bdev1", 00:19:56.289 "raid_level": "raid1", 00:19:56.289 "base_bdevs": [ 00:19:56.289 "malloc1", 00:19:56.289 "malloc2", 00:19:56.289 "malloc3", 00:19:56.289 "malloc4" 00:19:56.289 ], 00:19:56.289 "superblock": false, 00:19:56.289 "method": "bdev_raid_create", 00:19:56.289 "req_id": 1 00:19:56.289 } 00:19:56.289 Got JSON-RPC error response 00:19:56.289 response: 00:19:56.289 { 00:19:56.289 "code": -17, 00:19:56.289 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:56.289 } 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:56.289 05:30:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.289 [2024-11-20 05:30:28.078119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:56.289 [2024-11-20 05:30:28.078174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.289 [2024-11-20 05:30:28.078193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:56.289 [2024-11-20 05:30:28.078205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.289 [2024-11-20 05:30:28.080551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.289 [2024-11-20 05:30:28.080588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:56.289 [2024-11-20 05:30:28.080661] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:56.289 [2024-11-20 05:30:28.080716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:56.289 pt1 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:56.289 05:30:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.289 "name": "raid_bdev1", 00:19:56.289 "uuid": "66c413e5-348e-41ff-af04-405ced3472cd", 00:19:56.289 "strip_size_kb": 0, 00:19:56.289 "state": "configuring", 00:19:56.289 "raid_level": "raid1", 00:19:56.289 "superblock": true, 00:19:56.289 "num_base_bdevs": 4, 00:19:56.289 "num_base_bdevs_discovered": 1, 00:19:56.289 "num_base_bdevs_operational": 4, 00:19:56.289 "base_bdevs_list": [ 00:19:56.289 { 00:19:56.289 "name": "pt1", 00:19:56.289 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:56.289 "is_configured": true, 00:19:56.289 "data_offset": 2048, 00:19:56.289 "data_size": 63488 00:19:56.289 }, 00:19:56.289 { 00:19:56.289 "name": null, 00:19:56.289 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:56.289 "is_configured": false, 00:19:56.289 "data_offset": 2048, 00:19:56.289 "data_size": 63488 00:19:56.289 }, 00:19:56.289 { 00:19:56.289 "name": null, 00:19:56.289 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:56.289 
"is_configured": false, 00:19:56.289 "data_offset": 2048, 00:19:56.289 "data_size": 63488 00:19:56.289 }, 00:19:56.289 { 00:19:56.289 "name": null, 00:19:56.289 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:56.289 "is_configured": false, 00:19:56.289 "data_offset": 2048, 00:19:56.289 "data_size": 63488 00:19:56.289 } 00:19:56.289 ] 00:19:56.289 }' 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.289 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.860 [2024-11-20 05:30:28.414249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:56.860 [2024-11-20 05:30:28.414323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.860 [2024-11-20 05:30:28.414345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:56.860 [2024-11-20 05:30:28.414356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.860 [2024-11-20 05:30:28.414822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.860 [2024-11-20 05:30:28.414838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:56.860 [2024-11-20 05:30:28.414920] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:56.860 [2024-11-20 05:30:28.414948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:19:56.860 pt2 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.860 [2024-11-20 05:30:28.422238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.860 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.861 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.861 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.861 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:56.861 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.861 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.861 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.861 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.861 "name": "raid_bdev1", 00:19:56.861 "uuid": "66c413e5-348e-41ff-af04-405ced3472cd", 00:19:56.861 "strip_size_kb": 0, 00:19:56.861 "state": "configuring", 00:19:56.861 "raid_level": "raid1", 00:19:56.861 "superblock": true, 00:19:56.861 "num_base_bdevs": 4, 00:19:56.861 "num_base_bdevs_discovered": 1, 00:19:56.861 "num_base_bdevs_operational": 4, 00:19:56.861 "base_bdevs_list": [ 00:19:56.861 { 00:19:56.861 "name": "pt1", 00:19:56.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:56.861 "is_configured": true, 00:19:56.861 "data_offset": 2048, 00:19:56.861 "data_size": 63488 00:19:56.861 }, 00:19:56.861 { 00:19:56.861 "name": null, 00:19:56.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:56.861 "is_configured": false, 00:19:56.861 "data_offset": 0, 00:19:56.861 "data_size": 63488 00:19:56.861 }, 00:19:56.861 { 00:19:56.861 "name": null, 00:19:56.861 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:56.861 "is_configured": false, 00:19:56.861 "data_offset": 2048, 00:19:56.861 "data_size": 63488 00:19:56.861 }, 00:19:56.861 { 00:19:56.861 "name": null, 00:19:56.861 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:56.861 "is_configured": false, 00:19:56.861 "data_offset": 2048, 00:19:56.861 "data_size": 63488 00:19:56.861 } 00:19:56.861 ] 00:19:56.861 }' 00:19:56.861 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.861 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.123 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:19:57.123 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:57.123 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:57.123 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.123 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.123 [2024-11-20 05:30:28.746303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:57.123 [2024-11-20 05:30:28.746494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.123 [2024-11-20 05:30:28.746528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:57.123 [2024-11-20 05:30:28.746536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.123 [2024-11-20 05:30:28.746959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.123 [2024-11-20 05:30:28.746978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:57.123 [2024-11-20 05:30:28.747056] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:57.123 [2024-11-20 05:30:28.747076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:57.123 pt2 00:19:57.123 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.123 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:57.123 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:57.123 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:57.123 05:30:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.123 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.123 [2024-11-20 05:30:28.754271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:57.123 [2024-11-20 05:30:28.754407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.123 [2024-11-20 05:30:28.754429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:57.123 [2024-11-20 05:30:28.754436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.123 [2024-11-20 05:30:28.754757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.123 [2024-11-20 05:30:28.754774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:57.123 [2024-11-20 05:30:28.754828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:57.123 [2024-11-20 05:30:28.754843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:57.123 pt3 00:19:57.123 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.123 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:57.123 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:57.123 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:57.123 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.123 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.123 [2024-11-20 05:30:28.762237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:57.123 [2024-11-20 
05:30:28.762345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.123 [2024-11-20 05:30:28.762375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:57.124 [2024-11-20 05:30:28.762382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.124 [2024-11-20 05:30:28.762690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.124 [2024-11-20 05:30:28.762707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:57.124 [2024-11-20 05:30:28.762752] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:57.124 [2024-11-20 05:30:28.762767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:57.124 [2024-11-20 05:30:28.762885] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:57.124 [2024-11-20 05:30:28.762892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:57.124 [2024-11-20 05:30:28.763089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:57.124 [2024-11-20 05:30:28.763207] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:57.124 [2024-11-20 05:30:28.763215] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:57.124 [2024-11-20 05:30:28.763317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.124 pt4 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.124 "name": "raid_bdev1", 00:19:57.124 "uuid": "66c413e5-348e-41ff-af04-405ced3472cd", 00:19:57.124 "strip_size_kb": 0, 00:19:57.124 "state": "online", 00:19:57.124 "raid_level": "raid1", 00:19:57.124 "superblock": true, 00:19:57.124 "num_base_bdevs": 4, 00:19:57.124 
"num_base_bdevs_discovered": 4, 00:19:57.124 "num_base_bdevs_operational": 4, 00:19:57.124 "base_bdevs_list": [ 00:19:57.124 { 00:19:57.124 "name": "pt1", 00:19:57.124 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:57.124 "is_configured": true, 00:19:57.124 "data_offset": 2048, 00:19:57.124 "data_size": 63488 00:19:57.124 }, 00:19:57.124 { 00:19:57.124 "name": "pt2", 00:19:57.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.124 "is_configured": true, 00:19:57.124 "data_offset": 2048, 00:19:57.124 "data_size": 63488 00:19:57.124 }, 00:19:57.124 { 00:19:57.124 "name": "pt3", 00:19:57.124 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:57.124 "is_configured": true, 00:19:57.124 "data_offset": 2048, 00:19:57.124 "data_size": 63488 00:19:57.124 }, 00:19:57.124 { 00:19:57.124 "name": "pt4", 00:19:57.124 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:57.124 "is_configured": true, 00:19:57.124 "data_offset": 2048, 00:19:57.124 "data_size": 63488 00:19:57.124 } 00:19:57.124 ] 00:19:57.124 }' 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.124 05:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.383 [2024-11-20 05:30:29.090697] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:57.383 "name": "raid_bdev1", 00:19:57.383 "aliases": [ 00:19:57.383 "66c413e5-348e-41ff-af04-405ced3472cd" 00:19:57.383 ], 00:19:57.383 "product_name": "Raid Volume", 00:19:57.383 "block_size": 512, 00:19:57.383 "num_blocks": 63488, 00:19:57.383 "uuid": "66c413e5-348e-41ff-af04-405ced3472cd", 00:19:57.383 "assigned_rate_limits": { 00:19:57.383 "rw_ios_per_sec": 0, 00:19:57.383 "rw_mbytes_per_sec": 0, 00:19:57.383 "r_mbytes_per_sec": 0, 00:19:57.383 "w_mbytes_per_sec": 0 00:19:57.383 }, 00:19:57.383 "claimed": false, 00:19:57.383 "zoned": false, 00:19:57.383 "supported_io_types": { 00:19:57.383 "read": true, 00:19:57.383 "write": true, 00:19:57.383 "unmap": false, 00:19:57.383 "flush": false, 00:19:57.383 "reset": true, 00:19:57.383 "nvme_admin": false, 00:19:57.383 "nvme_io": false, 00:19:57.383 "nvme_io_md": false, 00:19:57.383 "write_zeroes": true, 00:19:57.383 "zcopy": false, 00:19:57.383 "get_zone_info": false, 00:19:57.383 "zone_management": false, 00:19:57.383 "zone_append": false, 00:19:57.383 "compare": false, 00:19:57.383 "compare_and_write": false, 00:19:57.383 "abort": false, 00:19:57.383 "seek_hole": false, 00:19:57.383 "seek_data": false, 00:19:57.383 "copy": false, 00:19:57.383 "nvme_iov_md": false 00:19:57.383 }, 00:19:57.383 "memory_domains": [ 00:19:57.383 { 00:19:57.383 "dma_device_id": "system", 00:19:57.383 
"dma_device_type": 1 00:19:57.383 }, 00:19:57.383 { 00:19:57.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.383 "dma_device_type": 2 00:19:57.383 }, 00:19:57.383 { 00:19:57.383 "dma_device_id": "system", 00:19:57.383 "dma_device_type": 1 00:19:57.383 }, 00:19:57.383 { 00:19:57.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.383 "dma_device_type": 2 00:19:57.383 }, 00:19:57.383 { 00:19:57.383 "dma_device_id": "system", 00:19:57.383 "dma_device_type": 1 00:19:57.383 }, 00:19:57.383 { 00:19:57.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.383 "dma_device_type": 2 00:19:57.383 }, 00:19:57.383 { 00:19:57.383 "dma_device_id": "system", 00:19:57.383 "dma_device_type": 1 00:19:57.383 }, 00:19:57.383 { 00:19:57.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.383 "dma_device_type": 2 00:19:57.383 } 00:19:57.383 ], 00:19:57.383 "driver_specific": { 00:19:57.383 "raid": { 00:19:57.383 "uuid": "66c413e5-348e-41ff-af04-405ced3472cd", 00:19:57.383 "strip_size_kb": 0, 00:19:57.383 "state": "online", 00:19:57.383 "raid_level": "raid1", 00:19:57.383 "superblock": true, 00:19:57.383 "num_base_bdevs": 4, 00:19:57.383 "num_base_bdevs_discovered": 4, 00:19:57.383 "num_base_bdevs_operational": 4, 00:19:57.383 "base_bdevs_list": [ 00:19:57.383 { 00:19:57.383 "name": "pt1", 00:19:57.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:57.383 "is_configured": true, 00:19:57.383 "data_offset": 2048, 00:19:57.383 "data_size": 63488 00:19:57.383 }, 00:19:57.383 { 00:19:57.383 "name": "pt2", 00:19:57.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.383 "is_configured": true, 00:19:57.383 "data_offset": 2048, 00:19:57.383 "data_size": 63488 00:19:57.383 }, 00:19:57.383 { 00:19:57.383 "name": "pt3", 00:19:57.383 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:57.383 "is_configured": true, 00:19:57.383 "data_offset": 2048, 00:19:57.383 "data_size": 63488 00:19:57.383 }, 00:19:57.383 { 00:19:57.383 "name": "pt4", 00:19:57.383 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:19:57.383 "is_configured": true, 00:19:57.383 "data_offset": 2048, 00:19:57.383 "data_size": 63488 00:19:57.383 } 00:19:57.383 ] 00:19:57.383 } 00:19:57.383 } 00:19:57.383 }' 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:57.383 pt2 00:19:57.383 pt3 00:19:57.383 pt4' 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.383 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.641 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.642 [2024-11-20 05:30:29.334710] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 66c413e5-348e-41ff-af04-405ced3472cd '!=' 66c413e5-348e-41ff-af04-405ced3472cd ']' 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.642 [2024-11-20 05:30:29.362461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:57.642 05:30:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.642 "name": "raid_bdev1", 00:19:57.642 "uuid": "66c413e5-348e-41ff-af04-405ced3472cd", 00:19:57.642 "strip_size_kb": 0, 00:19:57.642 "state": "online", 
00:19:57.642 "raid_level": "raid1", 00:19:57.642 "superblock": true, 00:19:57.642 "num_base_bdevs": 4, 00:19:57.642 "num_base_bdevs_discovered": 3, 00:19:57.642 "num_base_bdevs_operational": 3, 00:19:57.642 "base_bdevs_list": [ 00:19:57.642 { 00:19:57.642 "name": null, 00:19:57.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.642 "is_configured": false, 00:19:57.642 "data_offset": 0, 00:19:57.642 "data_size": 63488 00:19:57.642 }, 00:19:57.642 { 00:19:57.642 "name": "pt2", 00:19:57.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.642 "is_configured": true, 00:19:57.642 "data_offset": 2048, 00:19:57.642 "data_size": 63488 00:19:57.642 }, 00:19:57.642 { 00:19:57.642 "name": "pt3", 00:19:57.642 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:57.642 "is_configured": true, 00:19:57.642 "data_offset": 2048, 00:19:57.642 "data_size": 63488 00:19:57.642 }, 00:19:57.642 { 00:19:57.642 "name": "pt4", 00:19:57.642 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:57.642 "is_configured": true, 00:19:57.642 "data_offset": 2048, 00:19:57.642 "data_size": 63488 00:19:57.642 } 00:19:57.642 ] 00:19:57.642 }' 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.642 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.208 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:58.208 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.208 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.208 [2024-11-20 05:30:29.742530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.208 [2024-11-20 05:30:29.742560] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.209 [2024-11-20 05:30:29.742638] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:19:58.209 [2024-11-20 05:30:29.742711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.209 [2024-11-20 05:30:29.742719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:58.209 
05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.209 [2024-11-20 05:30:29.802464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:58.209 [2024-11-20 05:30:29.802514] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.209 [2024-11-20 05:30:29.802531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:58.209 [2024-11-20 05:30:29.802539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.209 [2024-11-20 05:30:29.804552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.209 [2024-11-20 05:30:29.804684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:58.209 [2024-11-20 05:30:29.804766] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:58.209 [2024-11-20 05:30:29.804807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:58.209 pt2 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.209 "name": "raid_bdev1", 00:19:58.209 "uuid": "66c413e5-348e-41ff-af04-405ced3472cd", 00:19:58.209 "strip_size_kb": 0, 00:19:58.209 "state": "configuring", 00:19:58.209 "raid_level": "raid1", 00:19:58.209 "superblock": true, 00:19:58.209 "num_base_bdevs": 4, 00:19:58.209 "num_base_bdevs_discovered": 1, 00:19:58.209 "num_base_bdevs_operational": 3, 00:19:58.209 "base_bdevs_list": [ 00:19:58.209 { 00:19:58.209 "name": null, 00:19:58.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.209 "is_configured": false, 00:19:58.209 "data_offset": 2048, 00:19:58.209 "data_size": 63488 00:19:58.209 }, 00:19:58.209 { 00:19:58.209 "name": "pt2", 00:19:58.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:58.209 "is_configured": true, 00:19:58.209 "data_offset": 2048, 00:19:58.209 "data_size": 63488 00:19:58.209 }, 00:19:58.209 { 00:19:58.209 "name": null, 00:19:58.209 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:58.209 "is_configured": false, 00:19:58.209 "data_offset": 2048, 00:19:58.209 "data_size": 63488 00:19:58.209 }, 00:19:58.209 { 00:19:58.209 "name": null, 00:19:58.209 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:58.209 "is_configured": false, 00:19:58.209 "data_offset": 2048, 00:19:58.209 "data_size": 63488 00:19:58.209 } 00:19:58.209 ] 00:19:58.209 }' 
00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.209 05:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.468 [2024-11-20 05:30:30.130575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:58.468 [2024-11-20 05:30:30.130636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.468 [2024-11-20 05:30:30.130657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:58.468 [2024-11-20 05:30:30.130665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.468 [2024-11-20 05:30:30.131080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.468 [2024-11-20 05:30:30.131092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:58.468 [2024-11-20 05:30:30.131165] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:58.468 [2024-11-20 05:30:30.131184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:58.468 pt3 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.468 "name": "raid_bdev1", 00:19:58.468 "uuid": "66c413e5-348e-41ff-af04-405ced3472cd", 00:19:58.468 "strip_size_kb": 0, 00:19:58.468 "state": "configuring", 00:19:58.468 "raid_level": "raid1", 00:19:58.468 "superblock": true, 00:19:58.468 "num_base_bdevs": 4, 00:19:58.468 "num_base_bdevs_discovered": 2, 00:19:58.468 "num_base_bdevs_operational": 3, 00:19:58.468 
"base_bdevs_list": [ 00:19:58.468 { 00:19:58.468 "name": null, 00:19:58.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.468 "is_configured": false, 00:19:58.468 "data_offset": 2048, 00:19:58.468 "data_size": 63488 00:19:58.468 }, 00:19:58.468 { 00:19:58.468 "name": "pt2", 00:19:58.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:58.468 "is_configured": true, 00:19:58.468 "data_offset": 2048, 00:19:58.468 "data_size": 63488 00:19:58.468 }, 00:19:58.468 { 00:19:58.468 "name": "pt3", 00:19:58.468 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:58.468 "is_configured": true, 00:19:58.468 "data_offset": 2048, 00:19:58.468 "data_size": 63488 00:19:58.468 }, 00:19:58.468 { 00:19:58.468 "name": null, 00:19:58.468 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:58.468 "is_configured": false, 00:19:58.468 "data_offset": 2048, 00:19:58.468 "data_size": 63488 00:19:58.468 } 00:19:58.468 ] 00:19:58.468 }' 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.468 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.727 [2024-11-20 05:30:30.458648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:58.727 [2024-11-20 05:30:30.458805] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.727 [2024-11-20 05:30:30.458831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:58.727 [2024-11-20 05:30:30.458839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.727 [2024-11-20 05:30:30.459235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.727 [2024-11-20 05:30:30.459246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:58.727 [2024-11-20 05:30:30.459318] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:58.727 [2024-11-20 05:30:30.459339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:58.727 [2024-11-20 05:30:30.459471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:58.727 [2024-11-20 05:30:30.459478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:58.727 [2024-11-20 05:30:30.459686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:58.727 [2024-11-20 05:30:30.459811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:58.727 [2024-11-20 05:30:30.459826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:58.727 [2024-11-20 05:30:30.459937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.727 pt4 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.727 "name": "raid_bdev1", 00:19:58.727 "uuid": "66c413e5-348e-41ff-af04-405ced3472cd", 00:19:58.727 "strip_size_kb": 0, 00:19:58.727 "state": "online", 00:19:58.727 "raid_level": "raid1", 00:19:58.727 "superblock": true, 00:19:58.727 "num_base_bdevs": 4, 00:19:58.727 "num_base_bdevs_discovered": 3, 00:19:58.727 "num_base_bdevs_operational": 3, 00:19:58.727 "base_bdevs_list": [ 00:19:58.727 { 00:19:58.727 "name": null, 00:19:58.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.727 "is_configured": false, 00:19:58.727 
"data_offset": 2048, 00:19:58.727 "data_size": 63488 00:19:58.727 }, 00:19:58.727 { 00:19:58.727 "name": "pt2", 00:19:58.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:58.727 "is_configured": true, 00:19:58.727 "data_offset": 2048, 00:19:58.727 "data_size": 63488 00:19:58.727 }, 00:19:58.727 { 00:19:58.727 "name": "pt3", 00:19:58.727 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:58.727 "is_configured": true, 00:19:58.727 "data_offset": 2048, 00:19:58.727 "data_size": 63488 00:19:58.727 }, 00:19:58.727 { 00:19:58.727 "name": "pt4", 00:19:58.727 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:58.727 "is_configured": true, 00:19:58.727 "data_offset": 2048, 00:19:58.727 "data_size": 63488 00:19:58.727 } 00:19:58.727 ] 00:19:58.727 }' 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.727 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.985 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:58.985 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.985 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.985 [2024-11-20 05:30:30.774664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.985 [2024-11-20 05:30:30.774690] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.985 [2024-11-20 05:30:30.774762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.985 [2024-11-20 05:30:30.774828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.985 [2024-11-20 05:30:30.774838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:58.985 05:30:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.985 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.985 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.985 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.985 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:58.985 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.985 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:58.985 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:58.985 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:19:58.985 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:19:58.985 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:19:58.985 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.985 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.243 [2024-11-20 05:30:30.826682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:59.243 [2024-11-20 05:30:30.826748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:19:59.243 [2024-11-20 05:30:30.826764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:19:59.243 [2024-11-20 05:30:30.826773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.243 [2024-11-20 05:30:30.828839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.243 [2024-11-20 05:30:30.828873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:59.243 [2024-11-20 05:30:30.828951] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:59.243 [2024-11-20 05:30:30.828992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:59.243 [2024-11-20 05:30:30.829096] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:59.243 [2024-11-20 05:30:30.829107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:59.243 [2024-11-20 05:30:30.829119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:59.243 [2024-11-20 05:30:30.829167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:59.243 [2024-11-20 05:30:30.829255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:59.243 pt1 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.243 "name": "raid_bdev1", 00:19:59.243 "uuid": "66c413e5-348e-41ff-af04-405ced3472cd", 00:19:59.243 "strip_size_kb": 0, 00:19:59.243 "state": "configuring", 00:19:59.243 "raid_level": "raid1", 00:19:59.243 "superblock": true, 00:19:59.243 "num_base_bdevs": 4, 00:19:59.243 "num_base_bdevs_discovered": 2, 00:19:59.243 "num_base_bdevs_operational": 3, 00:19:59.243 "base_bdevs_list": [ 00:19:59.243 { 00:19:59.243 "name": null, 00:19:59.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.243 "is_configured": false, 00:19:59.243 "data_offset": 2048, 00:19:59.243 
"data_size": 63488 00:19:59.243 }, 00:19:59.243 { 00:19:59.243 "name": "pt2", 00:19:59.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:59.243 "is_configured": true, 00:19:59.243 "data_offset": 2048, 00:19:59.243 "data_size": 63488 00:19:59.243 }, 00:19:59.243 { 00:19:59.243 "name": "pt3", 00:19:59.243 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:59.243 "is_configured": true, 00:19:59.243 "data_offset": 2048, 00:19:59.243 "data_size": 63488 00:19:59.243 }, 00:19:59.243 { 00:19:59.243 "name": null, 00:19:59.243 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:59.243 "is_configured": false, 00:19:59.243 "data_offset": 2048, 00:19:59.243 "data_size": 63488 00:19:59.243 } 00:19:59.243 ] 00:19:59.243 }' 00:19:59.243 05:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.244 05:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.502 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:19:59.502 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:59.502 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.502 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.502 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.502 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:19:59.502 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:59.502 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.502 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.502 [2024-11-20 
05:30:31.170765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:59.502 [2024-11-20 05:30:31.170833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.502 [2024-11-20 05:30:31.170851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:59.502 [2024-11-20 05:30:31.170859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.502 [2024-11-20 05:30:31.171244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.502 [2024-11-20 05:30:31.171260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:59.502 [2024-11-20 05:30:31.171331] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:59.502 [2024-11-20 05:30:31.171354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:59.502 [2024-11-20 05:30:31.171475] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:59.502 [2024-11-20 05:30:31.171483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:59.502 [2024-11-20 05:30:31.171698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:59.502 [2024-11-20 05:30:31.171822] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:59.502 [2024-11-20 05:30:31.171835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:59.502 [2024-11-20 05:30:31.171948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:59.502 pt4 00:19:59.502 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.502 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:59.502 05:30:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.502 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:59.502 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:59.502 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:59.503 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:59.503 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.503 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.503 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.503 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.503 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.503 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.503 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.503 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.503 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.503 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.503 "name": "raid_bdev1", 00:19:59.503 "uuid": "66c413e5-348e-41ff-af04-405ced3472cd", 00:19:59.503 "strip_size_kb": 0, 00:19:59.503 "state": "online", 00:19:59.503 "raid_level": "raid1", 00:19:59.503 "superblock": true, 00:19:59.503 "num_base_bdevs": 4, 00:19:59.503 "num_base_bdevs_discovered": 3, 00:19:59.503 "num_base_bdevs_operational": 3, 00:19:59.503 "base_bdevs_list": [ 00:19:59.503 { 
00:19:59.503 "name": null, 00:19:59.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.503 "is_configured": false, 00:19:59.503 "data_offset": 2048, 00:19:59.503 "data_size": 63488 00:19:59.503 }, 00:19:59.503 { 00:19:59.503 "name": "pt2", 00:19:59.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:59.503 "is_configured": true, 00:19:59.503 "data_offset": 2048, 00:19:59.503 "data_size": 63488 00:19:59.503 }, 00:19:59.503 { 00:19:59.503 "name": "pt3", 00:19:59.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:59.503 "is_configured": true, 00:19:59.503 "data_offset": 2048, 00:19:59.503 "data_size": 63488 00:19:59.503 }, 00:19:59.503 { 00:19:59.503 "name": "pt4", 00:19:59.503 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:59.503 "is_configured": true, 00:19:59.503 "data_offset": 2048, 00:19:59.503 "data_size": 63488 00:19:59.503 } 00:19:59.503 ] 00:19:59.503 }' 00:19:59.503 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.503 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:59.762 
05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.762 [2024-11-20 05:30:31.535080] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 66c413e5-348e-41ff-af04-405ced3472cd '!=' 66c413e5-348e-41ff-af04-405ced3472cd ']' 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72598 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72598 ']' 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72598 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72598 00:19:59.762 killing process with pid 72598 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72598' 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72598 00:19:59.762 05:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72598 00:19:59.762 [2024-11-20 05:30:31.587406] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:59.762 [2024-11-20 05:30:31.587496] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:59.762 [2024-11-20 05:30:31.587567] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:59.762 [2024-11-20 05:30:31.587577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:00.021 [2024-11-20 05:30:31.788412] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:00.587 ************************************ 00:20:00.587 END TEST raid_superblock_test 00:20:00.587 ************************************ 00:20:00.587 05:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:00.587 00:20:00.587 real 0m6.181s 00:20:00.587 user 0m9.842s 00:20:00.587 sys 0m1.044s 00:20:00.588 05:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:00.588 05:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.588 05:30:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:20:00.588 05:30:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:00.588 05:30:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:00.588 05:30:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:00.846 ************************************ 00:20:00.846 START TEST raid_read_error_test 00:20:00.846 ************************************ 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:20:00.846 05:30:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:00.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jYeUKv3C2j 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73063 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73063 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 73063 ']' 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:00.846 05:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.846 [2024-11-20 05:30:32.500100] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:20:00.847 [2024-11-20 05:30:32.500233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73063 ] 00:20:00.847 [2024-11-20 05:30:32.657320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.108 [2024-11-20 05:30:32.758120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.108 [2024-11-20 05:30:32.879891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.108 [2024-11-20 05:30:32.879939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.686 BaseBdev1_malloc 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.686 true 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.686 [2024-11-20 05:30:33.388864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:01.686 [2024-11-20 05:30:33.388923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.686 [2024-11-20 05:30:33.388939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:01.686 [2024-11-20 05:30:33.388948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.686 [2024-11-20 05:30:33.390807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.686 [2024-11-20 05:30:33.390842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:01.686 BaseBdev1 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.686 BaseBdev2_malloc 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.686 true 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.686 [2024-11-20 05:30:33.430205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:01.686 [2024-11-20 05:30:33.430249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.686 [2024-11-20 05:30:33.430262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:01.686 [2024-11-20 05:30:33.430270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.686 [2024-11-20 05:30:33.432104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.686 [2024-11-20 05:30:33.432265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:01.686 BaseBdev2 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.686 BaseBdev3_malloc 00:20:01.686 05:30:33 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.686 true 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.686 [2024-11-20 05:30:33.487951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:01.686 [2024-11-20 05:30:33.487996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.686 [2024-11-20 05:30:33.488010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:01.686 [2024-11-20 05:30:33.488020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.686 [2024-11-20 05:30:33.489861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.686 [2024-11-20 05:30:33.490021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:01.686 BaseBdev3 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.686 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.957 BaseBdev4_malloc 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.957 true 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.957 [2024-11-20 05:30:33.529376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:01.957 [2024-11-20 05:30:33.529423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.957 [2024-11-20 05:30:33.529438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:01.957 [2024-11-20 05:30:33.529448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.957 [2024-11-20 05:30:33.531277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.957 [2024-11-20 05:30:33.531312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:01.957 BaseBdev4 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.957 [2024-11-20 05:30:33.537432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:01.957 [2024-11-20 05:30:33.539036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:01.957 [2024-11-20 05:30:33.539101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:01.957 [2024-11-20 05:30:33.539153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:01.957 [2024-11-20 05:30:33.539343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:20:01.957 [2024-11-20 05:30:33.539353] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:01.957 [2024-11-20 05:30:33.539567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:20:01.957 [2024-11-20 05:30:33.539697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:20:01.957 [2024-11-20 05:30:33.539713] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:20:01.957 [2024-11-20 05:30:33.539828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:01.957 05:30:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.957 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.957 "name": "raid_bdev1", 00:20:01.957 "uuid": "3a2dd803-a5de-49cc-8111-c8c27fd6c89d", 00:20:01.958 "strip_size_kb": 0, 00:20:01.958 "state": "online", 00:20:01.958 "raid_level": "raid1", 00:20:01.958 "superblock": true, 00:20:01.958 "num_base_bdevs": 4, 00:20:01.958 "num_base_bdevs_discovered": 4, 00:20:01.958 "num_base_bdevs_operational": 4, 00:20:01.958 "base_bdevs_list": [ 00:20:01.958 { 
00:20:01.958 "name": "BaseBdev1", 00:20:01.958 "uuid": "80743a68-9a4d-5c93-9a3b-b77e0dd89ad6", 00:20:01.958 "is_configured": true, 00:20:01.958 "data_offset": 2048, 00:20:01.958 "data_size": 63488 00:20:01.958 }, 00:20:01.958 { 00:20:01.958 "name": "BaseBdev2", 00:20:01.958 "uuid": "43b2cdcc-637f-5c05-aa40-d91aa2555d23", 00:20:01.958 "is_configured": true, 00:20:01.958 "data_offset": 2048, 00:20:01.958 "data_size": 63488 00:20:01.958 }, 00:20:01.958 { 00:20:01.958 "name": "BaseBdev3", 00:20:01.958 "uuid": "cd1470b7-31c8-577a-aae3-ce8ace3188c1", 00:20:01.958 "is_configured": true, 00:20:01.958 "data_offset": 2048, 00:20:01.958 "data_size": 63488 00:20:01.958 }, 00:20:01.958 { 00:20:01.958 "name": "BaseBdev4", 00:20:01.958 "uuid": "ddabd75f-02f2-5e2f-9544-6ddc0e7ac664", 00:20:01.958 "is_configured": true, 00:20:01.958 "data_offset": 2048, 00:20:01.958 "data_size": 63488 00:20:01.958 } 00:20:01.958 ] 00:20:01.958 }' 00:20:01.958 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.958 05:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.215 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:02.215 05:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:02.215 [2024-11-20 05:30:33.942359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.149 05:30:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.149 05:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.149 05:30:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.150 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.150 "name": "raid_bdev1", 00:20:03.150 "uuid": "3a2dd803-a5de-49cc-8111-c8c27fd6c89d", 00:20:03.150 "strip_size_kb": 0, 00:20:03.150 "state": "online", 00:20:03.150 "raid_level": "raid1", 00:20:03.150 "superblock": true, 00:20:03.150 "num_base_bdevs": 4, 00:20:03.150 "num_base_bdevs_discovered": 4, 00:20:03.150 "num_base_bdevs_operational": 4, 00:20:03.150 "base_bdevs_list": [ 00:20:03.150 { 00:20:03.150 "name": "BaseBdev1", 00:20:03.150 "uuid": "80743a68-9a4d-5c93-9a3b-b77e0dd89ad6", 00:20:03.150 "is_configured": true, 00:20:03.150 "data_offset": 2048, 00:20:03.150 "data_size": 63488 00:20:03.150 }, 00:20:03.150 { 00:20:03.150 "name": "BaseBdev2", 00:20:03.150 "uuid": "43b2cdcc-637f-5c05-aa40-d91aa2555d23", 00:20:03.150 "is_configured": true, 00:20:03.150 "data_offset": 2048, 00:20:03.150 "data_size": 63488 00:20:03.150 }, 00:20:03.150 { 00:20:03.150 "name": "BaseBdev3", 00:20:03.150 "uuid": "cd1470b7-31c8-577a-aae3-ce8ace3188c1", 00:20:03.150 "is_configured": true, 00:20:03.150 "data_offset": 2048, 00:20:03.150 "data_size": 63488 00:20:03.150 }, 00:20:03.150 { 00:20:03.150 "name": "BaseBdev4", 00:20:03.150 "uuid": "ddabd75f-02f2-5e2f-9544-6ddc0e7ac664", 00:20:03.150 "is_configured": true, 00:20:03.150 "data_offset": 2048, 00:20:03.150 "data_size": 63488 00:20:03.150 } 00:20:03.150 ] 00:20:03.150 }' 00:20:03.150 05:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.150 05:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.408 05:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:03.408 05:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.408 05:30:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:03.408 [2024-11-20 05:30:35.197494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:03.409 [2024-11-20 05:30:35.197532] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:03.409 [2024-11-20 05:30:35.199886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.409 [2024-11-20 05:30:35.199940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.409 [2024-11-20 05:30:35.200047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.409 [2024-11-20 05:30:35.200058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:20:03.409 { 00:20:03.409 "results": [ 00:20:03.409 { 00:20:03.409 "job": "raid_bdev1", 00:20:03.409 "core_mask": "0x1", 00:20:03.409 "workload": "randrw", 00:20:03.409 "percentage": 50, 00:20:03.409 "status": "finished", 00:20:03.409 "queue_depth": 1, 00:20:03.409 "io_size": 131072, 00:20:03.409 "runtime": 1.253476, 00:20:03.409 "iops": 12429.436223748999, 00:20:03.409 "mibps": 1553.6795279686248, 00:20:03.409 "io_failed": 0, 00:20:03.409 "io_timeout": 0, 00:20:03.409 "avg_latency_us": 78.00294657845365, 00:20:03.409 "min_latency_us": 22.84307692307692, 00:20:03.409 "max_latency_us": 1411.5446153846153 00:20:03.409 } 00:20:03.409 ], 00:20:03.409 "core_count": 1 00:20:03.409 } 00:20:03.409 05:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.409 05:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73063 00:20:03.409 05:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 73063 ']' 00:20:03.409 05:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 73063 00:20:03.409 05:30:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:20:03.409 05:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:03.409 05:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73063 00:20:03.409 killing process with pid 73063 00:20:03.409 05:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:03.409 05:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:03.409 05:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73063' 00:20:03.409 05:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 73063 00:20:03.409 [2024-11-20 05:30:35.227437] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:03.409 05:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 73063 00:20:03.667 [2024-11-20 05:30:35.391163] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:04.233 05:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jYeUKv3C2j 00:20:04.233 05:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:04.233 05:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:04.233 ************************************ 00:20:04.233 END TEST raid_read_error_test 00:20:04.233 ************************************ 00:20:04.233 05:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:20:04.233 05:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:20:04.233 05:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:04.233 05:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:04.233 05:30:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:20:04.233 00:20:04.233 real 0m3.596s 00:20:04.233 user 0m4.232s 00:20:04.233 sys 0m0.461s 00:20:04.233 05:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:04.233 05:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.233 05:30:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:20:04.233 05:30:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:04.233 05:30:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:04.233 05:30:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:04.233 ************************************ 00:20:04.233 START TEST raid_write_error_test 00:20:04.233 ************************************ 00:20:04.233 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:20:04.233 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:20:04.233 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:20:04.233 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:20:04.233 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:04.233 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:04.491 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:04.492 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:04.492 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:04.492 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:20:04.492 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:20:04.492 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:04.492 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GbSDRS8kce 00:20:04.492 05:30:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73192 00:20:04.492 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73192 00:20:04.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.492 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 73192 ']' 00:20:04.492 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.492 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:04.492 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.492 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:04.492 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.492 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:04.492 [2024-11-20 05:30:36.143699] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:20:04.492 [2024-11-20 05:30:36.143959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73192 ] 00:20:04.492 [2024-11-20 05:30:36.301264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.750 [2024-11-20 05:30:36.400709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.750 [2024-11-20 05:30:36.521468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:04.750 [2024-11-20 05:30:36.521512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.318 BaseBdev1_malloc 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.318 true 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.318 [2024-11-20 05:30:36.938434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:05.318 [2024-11-20 05:30:36.938493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.318 [2024-11-20 05:30:36.938511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:05.318 [2024-11-20 05:30:36.938520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.318 [2024-11-20 05:30:36.940345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.318 [2024-11-20 05:30:36.940388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:05.318 BaseBdev1 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.318 BaseBdev2_malloc 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:05.318 05:30:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.318 true 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.318 [2024-11-20 05:30:36.979890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:05.318 [2024-11-20 05:30:36.980071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.318 [2024-11-20 05:30:36.980092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:05.318 [2024-11-20 05:30:36.980101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.318 [2024-11-20 05:30:36.981941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.318 [2024-11-20 05:30:36.981973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:05.318 BaseBdev2 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.318 05:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:20:05.318 BaseBdev3_malloc 00:20:05.318 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.318 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:20:05.318 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.318 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.319 true 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.319 [2024-11-20 05:30:37.044443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:05.319 [2024-11-20 05:30:37.044495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.319 [2024-11-20 05:30:37.044510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:05.319 [2024-11-20 05:30:37.044519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.319 [2024-11-20 05:30:37.046334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.319 [2024-11-20 05:30:37.046378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:05.319 BaseBdev3 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.319 BaseBdev4_malloc 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.319 true 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.319 [2024-11-20 05:30:37.085715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:05.319 [2024-11-20 05:30:37.085761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.319 [2024-11-20 05:30:37.085776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:05.319 [2024-11-20 05:30:37.085784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.319 [2024-11-20 05:30:37.087602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.319 [2024-11-20 05:30:37.087753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:05.319 BaseBdev4 
00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.319 [2024-11-20 05:30:37.093773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:05.319 [2024-11-20 05:30:37.095389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:05.319 [2024-11-20 05:30:37.095453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:05.319 [2024-11-20 05:30:37.095506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:05.319 [2024-11-20 05:30:37.095692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:20:05.319 [2024-11-20 05:30:37.095710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:05.319 [2024-11-20 05:30:37.095914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:20:05.319 [2024-11-20 05:30:37.096043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:20:05.319 [2024-11-20 05:30:37.096051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:20:05.319 [2024-11-20 05:30:37.096169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.319 "name": "raid_bdev1", 00:20:05.319 "uuid": "2749b106-f4e8-4d14-9bcc-592051364796", 00:20:05.319 "strip_size_kb": 0, 00:20:05.319 "state": "online", 00:20:05.319 "raid_level": "raid1", 00:20:05.319 "superblock": true, 00:20:05.319 "num_base_bdevs": 4, 00:20:05.319 "num_base_bdevs_discovered": 4, 00:20:05.319 
"num_base_bdevs_operational": 4, 00:20:05.319 "base_bdevs_list": [ 00:20:05.319 { 00:20:05.319 "name": "BaseBdev1", 00:20:05.319 "uuid": "b00ab491-0566-50b0-8180-5e7e090e53bc", 00:20:05.319 "is_configured": true, 00:20:05.319 "data_offset": 2048, 00:20:05.319 "data_size": 63488 00:20:05.319 }, 00:20:05.319 { 00:20:05.319 "name": "BaseBdev2", 00:20:05.319 "uuid": "11dce69c-0700-51d1-8a19-71de3ae08827", 00:20:05.319 "is_configured": true, 00:20:05.319 "data_offset": 2048, 00:20:05.319 "data_size": 63488 00:20:05.319 }, 00:20:05.319 { 00:20:05.319 "name": "BaseBdev3", 00:20:05.319 "uuid": "fbb490b2-6ede-5aa0-bf08-ebf3c98bdcbf", 00:20:05.319 "is_configured": true, 00:20:05.319 "data_offset": 2048, 00:20:05.319 "data_size": 63488 00:20:05.319 }, 00:20:05.319 { 00:20:05.319 "name": "BaseBdev4", 00:20:05.319 "uuid": "741a42f9-53b7-5480-83e2-ffec66560c25", 00:20:05.319 "is_configured": true, 00:20:05.319 "data_offset": 2048, 00:20:05.319 "data_size": 63488 00:20:05.319 } 00:20:05.319 ] 00:20:05.319 }' 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.319 05:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.884 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:05.884 05:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:05.884 [2024-11-20 05:30:37.518728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.816 [2024-11-20 05:30:38.437086] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:20:06.816 [2024-11-20 05:30:38.437155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:06.816 [2024-11-20 05:30:38.437392] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.816 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.816 "name": "raid_bdev1", 00:20:06.816 "uuid": "2749b106-f4e8-4d14-9bcc-592051364796", 00:20:06.816 "strip_size_kb": 0, 00:20:06.816 "state": "online", 00:20:06.816 "raid_level": "raid1", 00:20:06.816 "superblock": true, 00:20:06.816 "num_base_bdevs": 4, 00:20:06.816 "num_base_bdevs_discovered": 3, 00:20:06.816 "num_base_bdevs_operational": 3, 00:20:06.817 "base_bdevs_list": [ 00:20:06.817 { 00:20:06.817 "name": null, 00:20:06.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.817 "is_configured": false, 00:20:06.817 "data_offset": 0, 00:20:06.817 "data_size": 63488 00:20:06.817 }, 00:20:06.817 { 00:20:06.817 "name": "BaseBdev2", 00:20:06.817 "uuid": "11dce69c-0700-51d1-8a19-71de3ae08827", 00:20:06.817 "is_configured": true, 00:20:06.817 "data_offset": 2048, 00:20:06.817 "data_size": 63488 00:20:06.817 }, 00:20:06.817 { 00:20:06.817 "name": "BaseBdev3", 00:20:06.817 "uuid": "fbb490b2-6ede-5aa0-bf08-ebf3c98bdcbf", 00:20:06.817 "is_configured": true, 00:20:06.817 "data_offset": 2048, 00:20:06.817 "data_size": 63488 00:20:06.817 }, 00:20:06.817 { 00:20:06.817 "name": "BaseBdev4", 00:20:06.817 "uuid": "741a42f9-53b7-5480-83e2-ffec66560c25", 00:20:06.817 "is_configured": true, 00:20:06.817 "data_offset": 2048, 00:20:06.817 "data_size": 63488 00:20:06.817 } 00:20:06.817 ] 
00:20:06.817 }' 00:20:06.817 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.817 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.075 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:07.075 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.075 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.075 [2024-11-20 05:30:38.756693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:07.075 [2024-11-20 05:30:38.756864] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:07.075 [2024-11-20 05:30:38.759319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:07.075 [2024-11-20 05:30:38.759369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.075 [2024-11-20 05:30:38.759460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:07.075 [2024-11-20 05:30:38.759471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:20:07.075 { 00:20:07.075 "results": [ 00:20:07.075 { 00:20:07.075 "job": "raid_bdev1", 00:20:07.075 "core_mask": "0x1", 00:20:07.075 "workload": "randrw", 00:20:07.075 "percentage": 50, 00:20:07.075 "status": "finished", 00:20:07.075 "queue_depth": 1, 00:20:07.075 "io_size": 131072, 00:20:07.075 "runtime": 1.236323, 00:20:07.075 "iops": 13245.729473608433, 00:20:07.075 "mibps": 1655.7161842010541, 00:20:07.075 "io_failed": 0, 00:20:07.075 "io_timeout": 0, 00:20:07.075 "avg_latency_us": 73.01186426665664, 00:20:07.075 "min_latency_us": 22.44923076923077, 00:20:07.075 "max_latency_us": 1367.4338461538462 00:20:07.075 } 00:20:07.075 ], 00:20:07.075 "core_count": 1 
00:20:07.075 } 00:20:07.075 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.075 05:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73192 00:20:07.075 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 73192 ']' 00:20:07.075 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 73192 00:20:07.075 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:20:07.075 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:07.075 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73192 00:20:07.075 killing process with pid 73192 00:20:07.075 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:07.075 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:07.075 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73192' 00:20:07.075 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 73192 00:20:07.075 05:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 73192 00:20:07.075 [2024-11-20 05:30:38.784721] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:07.333 [2024-11-20 05:30:38.954488] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:07.908 05:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GbSDRS8kce 00:20:07.908 05:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:07.908 05:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:07.908 05:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:20:07.908 05:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:20:07.908 05:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:07.908 05:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:07.908 05:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:20:07.908 00:20:07.908 real 0m3.526s 00:20:07.908 user 0m4.120s 00:20:07.908 sys 0m0.415s 00:20:07.908 ************************************ 00:20:07.908 END TEST raid_write_error_test 00:20:07.908 ************************************ 00:20:07.908 05:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:07.908 05:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.908 05:30:39 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:20:07.908 05:30:39 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:20:07.908 05:30:39 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:20:07.908 05:30:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:07.908 05:30:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:07.908 05:30:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:07.908 ************************************ 00:20:07.908 START TEST raid_rebuild_test 00:20:07.908 ************************************ 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:07.908 
05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=73330 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 73330 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 73330 ']' 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:07.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.908 05:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:07.908 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:07.908 Zero copy mechanism will not be used. 00:20:07.908 [2024-11-20 05:30:39.695628] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:20:07.908 [2024-11-20 05:30:39.695768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73330 ] 00:20:08.166 [2024-11-20 05:30:39.850749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.166 [2024-11-20 05:30:39.949084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.424 [2024-11-20 05:30:40.070103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.424 [2024-11-20 05:30:40.070148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.991 BaseBdev1_malloc 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.991 [2024-11-20 05:30:40.579117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:08.991 
[2024-11-20 05:30:40.579184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.991 [2024-11-20 05:30:40.579204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:08.991 [2024-11-20 05:30:40.579215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.991 [2024-11-20 05:30:40.581106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.991 [2024-11-20 05:30:40.581138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:08.991 BaseBdev1 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.991 BaseBdev2_malloc 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.991 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.991 [2024-11-20 05:30:40.612422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:08.991 [2024-11-20 05:30:40.612610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.991 [2024-11-20 05:30:40.612631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:20:08.991 [2024-11-20 05:30:40.612640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.991 [2024-11-20 05:30:40.614459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.992 [2024-11-20 05:30:40.614485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:08.992 BaseBdev2 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.992 spare_malloc 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.992 spare_delay 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.992 [2024-11-20 05:30:40.676451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:08.992 [2024-11-20 05:30:40.676643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:20:08.992 [2024-11-20 05:30:40.676663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:08.992 [2024-11-20 05:30:40.676673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.992 [2024-11-20 05:30:40.678517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.992 [2024-11-20 05:30:40.678546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:08.992 spare 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.992 [2024-11-20 05:30:40.684501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:08.992 [2024-11-20 05:30:40.686074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:08.992 [2024-11-20 05:30:40.686143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:08.992 [2024-11-20 05:30:40.686154] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:08.992 [2024-11-20 05:30:40.686382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:08.992 [2024-11-20 05:30:40.686499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:08.992 [2024-11-20 05:30:40.686508] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:08.992 [2024-11-20 05:30:40.686622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.992 "name": "raid_bdev1", 00:20:08.992 "uuid": "24145c50-4e26-48f2-a268-c2cafe05c4bd", 00:20:08.992 "strip_size_kb": 0, 00:20:08.992 "state": "online", 00:20:08.992 
"raid_level": "raid1", 00:20:08.992 "superblock": false, 00:20:08.992 "num_base_bdevs": 2, 00:20:08.992 "num_base_bdevs_discovered": 2, 00:20:08.992 "num_base_bdevs_operational": 2, 00:20:08.992 "base_bdevs_list": [ 00:20:08.992 { 00:20:08.992 "name": "BaseBdev1", 00:20:08.992 "uuid": "a7d90005-9dc1-54c7-a3a4-e09f9e86469c", 00:20:08.992 "is_configured": true, 00:20:08.992 "data_offset": 0, 00:20:08.992 "data_size": 65536 00:20:08.992 }, 00:20:08.992 { 00:20:08.992 "name": "BaseBdev2", 00:20:08.992 "uuid": "aec6291a-8f9f-5450-86ed-26dd84aef648", 00:20:08.992 "is_configured": true, 00:20:08.992 "data_offset": 0, 00:20:08.992 "data_size": 65536 00:20:08.992 } 00:20:08.992 ] 00:20:08.992 }' 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.992 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.251 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:09.251 05:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:09.251 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.251 05:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.251 [2024-11-20 05:30:41.004869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:09.251 05:30:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:09.509 [2024-11-20 05:30:41.212721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:09.509 /dev/nbd0 00:20:09.509 05:30:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:09.509 05:30:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:20:09.509 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:09.509 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:20:09.509 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:09.510 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:09.510 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:09.510 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:20:09.510 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:09.510 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:09.510 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:09.510 1+0 records in 00:20:09.510 1+0 records out 00:20:09.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200612 s, 20.4 MB/s 00:20:09.510 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.510 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:20:09.510 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.510 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:09.510 05:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:20:09.510 05:30:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:09.510 05:30:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:09.510 05:30:41 bdev_raid.raid_rebuild_test -- 
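The entries above trace a two-phase readiness check: loop until the freshly attached nbd device appears in /proc/partitions, then prove it is actually readable with a 4 KiB direct-I/O dd. A standalone sketch of that pattern (hypothetical helper name; the 20-try budget and dd parameters mirror the trace, this is not the SPDK common/autotest_common.sh implementation):

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd-style readiness check traced above.
# Illustrative only; retry budget and dd flags follow the log.
waitfornbd_sketch() {
    local nbd_name=$1
    local i
    # Phase 1: wait for the kernel to publish the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1
    # Phase 2: visibility is not readiness -- confirm a 4 KiB direct read works.
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct \
            2>/dev/null && return 0
        sleep 0.1
    done
    return 1
}
```

The direct-read phase matters because the device node can exist in /proc/partitions before the nbd client is fully wired up to the backing bdev.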
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:09.510 05:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:09.510 05:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:13.690 65536+0 records in 00:20:13.690 65536+0 records out 00:20:13.690 33554432 bytes (34 MB, 32 MiB) copied, 4.1532 s, 8.1 MB/s 00:20:13.690 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:13.690 05:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:13.690 05:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:13.690 05:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:13.690 05:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:13.691 05:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:13.691 05:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:13.948 [2024-11-20 05:30:45.629185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.948 [2024-11-20 05:30:45.657254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.948 "name": "raid_bdev1", 00:20:13.948 "uuid": "24145c50-4e26-48f2-a268-c2cafe05c4bd", 00:20:13.948 "strip_size_kb": 0, 00:20:13.948 "state": "online", 00:20:13.948 "raid_level": "raid1", 00:20:13.948 "superblock": false, 00:20:13.948 "num_base_bdevs": 2, 00:20:13.948 "num_base_bdevs_discovered": 1, 00:20:13.948 "num_base_bdevs_operational": 1, 00:20:13.948 "base_bdevs_list": [ 00:20:13.948 { 00:20:13.948 "name": null, 00:20:13.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.948 "is_configured": false, 00:20:13.948 "data_offset": 0, 00:20:13.948 "data_size": 65536 00:20:13.948 }, 00:20:13.948 { 00:20:13.948 "name": "BaseBdev2", 00:20:13.948 "uuid": "aec6291a-8f9f-5450-86ed-26dd84aef648", 00:20:13.948 "is_configured": true, 00:20:13.948 "data_offset": 0, 00:20:13.948 "data_size": 65536 00:20:13.948 } 00:20:13.948 ] 00:20:13.948 }' 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.948 05:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.205 05:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:14.205 05:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.205 05:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.205 [2024-11-20 05:30:45.993341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:14.205 [2024-11-20 05:30:46.003564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
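The verify_raid_bdev_state calls traced here compare the JSON returned by `bdev_raid_get_bdevs all` against an expected state and operational base-bdev count using jq. A minimal re-creation of that check (requires jq; the sample JSON is abridged from the RPC output in this log, and the function name is hypothetical):

```shell
#!/usr/bin/env bash
# Minimal sketch of the verify_raid_bdev_state check seen in the trace:
# select one raid bdev from the bdev_raid_get_bdevs JSON and compare its
# state and operational base-bdev count. Illustrative only.
verify_raid_state_sketch() {
    local json=$1 name=$2 expected_state=$3 expected_operational=$4
    local info
    info=$(jq -r ".[] | select(.name == \"$name\")" <<<"$json")
    [ "$(jq -r '.state' <<<"$info")" = "$expected_state" ] &&
        [ "$(jq -r '.num_base_bdevs_operational' <<<"$info")" = "$expected_operational" ]
}

# Abridged from the RPC output in the log: one base bdev removed, one left.
sample='[{"name":"raid_bdev1","state":"online","raid_level":"raid1",
          "num_base_bdevs":2,"num_base_bdevs_discovered":1,
          "num_base_bdevs_operational":1}]'
verify_raid_state_sketch "$sample" raid_bdev1 online 1 && echo "state ok"
```

This mirrors why the test still expects `"state": "online"` after removing BaseBdev1: a raid1 array stays online with a single operational member.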
00:20:14.205 05:30:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.205 05:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:14.205 [2024-11-20 05:30:46.005224] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.577 "name": "raid_bdev1", 00:20:15.577 "uuid": "24145c50-4e26-48f2-a268-c2cafe05c4bd", 00:20:15.577 "strip_size_kb": 0, 00:20:15.577 "state": "online", 00:20:15.577 "raid_level": "raid1", 00:20:15.577 "superblock": false, 00:20:15.577 "num_base_bdevs": 2, 00:20:15.577 "num_base_bdevs_discovered": 2, 00:20:15.577 "num_base_bdevs_operational": 2, 00:20:15.577 "process": { 00:20:15.577 "type": "rebuild", 00:20:15.577 "target": "spare", 00:20:15.577 "progress": { 00:20:15.577 
"blocks": 20480, 00:20:15.577 "percent": 31 00:20:15.577 } 00:20:15.577 }, 00:20:15.577 "base_bdevs_list": [ 00:20:15.577 { 00:20:15.577 "name": "spare", 00:20:15.577 "uuid": "de7a35a9-564a-5f58-bd83-e13f7b110c07", 00:20:15.577 "is_configured": true, 00:20:15.577 "data_offset": 0, 00:20:15.577 "data_size": 65536 00:20:15.577 }, 00:20:15.577 { 00:20:15.577 "name": "BaseBdev2", 00:20:15.577 "uuid": "aec6291a-8f9f-5450-86ed-26dd84aef648", 00:20:15.577 "is_configured": true, 00:20:15.577 "data_offset": 0, 00:20:15.577 "data_size": 65536 00:20:15.577 } 00:20:15.577 ] 00:20:15.577 }' 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.577 05:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.577 [2024-11-20 05:30:47.107440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:15.577 [2024-11-20 05:30:47.111957] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:15.577 [2024-11-20 05:30:47.112011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.577 [2024-11-20 05:30:47.112024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:15.577 [2024-11-20 05:30:47.112033] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:15.577 05:30:47 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.578 "name": "raid_bdev1", 00:20:15.578 "uuid": "24145c50-4e26-48f2-a268-c2cafe05c4bd", 00:20:15.578 "strip_size_kb": 0, 00:20:15.578 "state": "online", 00:20:15.578 "raid_level": "raid1", 00:20:15.578 
"superblock": false, 00:20:15.578 "num_base_bdevs": 2, 00:20:15.578 "num_base_bdevs_discovered": 1, 00:20:15.578 "num_base_bdevs_operational": 1, 00:20:15.578 "base_bdevs_list": [ 00:20:15.578 { 00:20:15.578 "name": null, 00:20:15.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.578 "is_configured": false, 00:20:15.578 "data_offset": 0, 00:20:15.578 "data_size": 65536 00:20:15.578 }, 00:20:15.578 { 00:20:15.578 "name": "BaseBdev2", 00:20:15.578 "uuid": "aec6291a-8f9f-5450-86ed-26dd84aef648", 00:20:15.578 "is_configured": true, 00:20:15.578 "data_offset": 0, 00:20:15.578 "data_size": 65536 00:20:15.578 } 00:20:15.578 ] 00:20:15.578 }' 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.578 05:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:15.836 "name": "raid_bdev1", 00:20:15.836 "uuid": "24145c50-4e26-48f2-a268-c2cafe05c4bd", 00:20:15.836 "strip_size_kb": 0, 00:20:15.836 "state": "online", 00:20:15.836 "raid_level": "raid1", 00:20:15.836 "superblock": false, 00:20:15.836 "num_base_bdevs": 2, 00:20:15.836 "num_base_bdevs_discovered": 1, 00:20:15.836 "num_base_bdevs_operational": 1, 00:20:15.836 "base_bdevs_list": [ 00:20:15.836 { 00:20:15.836 "name": null, 00:20:15.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.836 "is_configured": false, 00:20:15.836 "data_offset": 0, 00:20:15.836 "data_size": 65536 00:20:15.836 }, 00:20:15.836 { 00:20:15.836 "name": "BaseBdev2", 00:20:15.836 "uuid": "aec6291a-8f9f-5450-86ed-26dd84aef648", 00:20:15.836 "is_configured": true, 00:20:15.836 "data_offset": 0, 00:20:15.836 "data_size": 65536 00:20:15.836 } 00:20:15.836 ] 00:20:15.836 }' 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.836 [2024-11-20 05:30:47.543815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:15.836 [2024-11-20 05:30:47.553273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:20:15.836 05:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.836 
05:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:15.836 [2024-11-20 05:30:47.554921] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:16.770 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.770 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.770 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.770 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.770 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.770 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.770 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.770 05:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.770 05:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.770 05:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.770 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.770 "name": "raid_bdev1", 00:20:16.770 "uuid": "24145c50-4e26-48f2-a268-c2cafe05c4bd", 00:20:16.770 "strip_size_kb": 0, 00:20:16.770 "state": "online", 00:20:16.770 "raid_level": "raid1", 00:20:16.770 "superblock": false, 00:20:16.770 "num_base_bdevs": 2, 00:20:16.770 "num_base_bdevs_discovered": 2, 00:20:16.770 "num_base_bdevs_operational": 2, 00:20:16.770 "process": { 00:20:16.770 "type": "rebuild", 00:20:16.770 "target": "spare", 00:20:16.770 "progress": { 00:20:16.770 "blocks": 20480, 00:20:16.770 "percent": 31 00:20:16.770 } 00:20:16.770 }, 00:20:16.770 "base_bdevs_list": [ 
00:20:16.770 { 00:20:16.770 "name": "spare", 00:20:16.770 "uuid": "de7a35a9-564a-5f58-bd83-e13f7b110c07", 00:20:16.770 "is_configured": true, 00:20:16.770 "data_offset": 0, 00:20:16.770 "data_size": 65536 00:20:16.770 }, 00:20:16.770 { 00:20:16.770 "name": "BaseBdev2", 00:20:16.770 "uuid": "aec6291a-8f9f-5450-86ed-26dd84aef648", 00:20:16.770 "is_configured": true, 00:20:16.770 "data_offset": 0, 00:20:16.770 "data_size": 65536 00:20:16.770 } 00:20:16.770 ] 00:20:16.770 }' 00:20:16.770 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=283 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:17.028 
05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.028 "name": "raid_bdev1", 00:20:17.028 "uuid": "24145c50-4e26-48f2-a268-c2cafe05c4bd", 00:20:17.028 "strip_size_kb": 0, 00:20:17.028 "state": "online", 00:20:17.028 "raid_level": "raid1", 00:20:17.028 "superblock": false, 00:20:17.028 "num_base_bdevs": 2, 00:20:17.028 "num_base_bdevs_discovered": 2, 00:20:17.028 "num_base_bdevs_operational": 2, 00:20:17.028 "process": { 00:20:17.028 "type": "rebuild", 00:20:17.028 "target": "spare", 00:20:17.028 "progress": { 00:20:17.028 "blocks": 20480, 00:20:17.028 "percent": 31 00:20:17.028 } 00:20:17.028 }, 00:20:17.028 "base_bdevs_list": [ 00:20:17.028 { 00:20:17.028 "name": "spare", 00:20:17.028 "uuid": "de7a35a9-564a-5f58-bd83-e13f7b110c07", 00:20:17.028 "is_configured": true, 00:20:17.028 "data_offset": 0, 00:20:17.028 "data_size": 65536 00:20:17.028 }, 00:20:17.028 { 00:20:17.028 "name": "BaseBdev2", 00:20:17.028 "uuid": "aec6291a-8f9f-5450-86ed-26dd84aef648", 00:20:17.028 "is_configured": true, 00:20:17.028 "data_offset": 0, 00:20:17.028 "data_size": 65536 00:20:17.028 } 00:20:17.028 ] 00:20:17.028 }' 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.028 05:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:17.961 05:30:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:17.961 05:30:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:17.961 05:30:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.961 05:30:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:17.961 05:30:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:17.961 05:30:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.961 05:30:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.961 05:30:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.961 05:30:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.961 05:30:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.961 05:30:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.219 05:30:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.219 "name": "raid_bdev1", 00:20:18.219 "uuid": "24145c50-4e26-48f2-a268-c2cafe05c4bd", 00:20:18.219 "strip_size_kb": 0, 00:20:18.219 "state": "online", 00:20:18.219 "raid_level": "raid1", 00:20:18.219 "superblock": false, 00:20:18.219 "num_base_bdevs": 2, 00:20:18.219 "num_base_bdevs_discovered": 2, 00:20:18.219 "num_base_bdevs_operational": 2, 00:20:18.219 "process": { 
00:20:18.219 "type": "rebuild", 00:20:18.219 "target": "spare", 00:20:18.219 "progress": { 00:20:18.219 "blocks": 45056, 00:20:18.219 "percent": 68 00:20:18.219 } 00:20:18.219 }, 00:20:18.219 "base_bdevs_list": [ 00:20:18.219 { 00:20:18.219 "name": "spare", 00:20:18.219 "uuid": "de7a35a9-564a-5f58-bd83-e13f7b110c07", 00:20:18.219 "is_configured": true, 00:20:18.219 "data_offset": 0, 00:20:18.219 "data_size": 65536 00:20:18.219 }, 00:20:18.219 { 00:20:18.219 "name": "BaseBdev2", 00:20:18.219 "uuid": "aec6291a-8f9f-5450-86ed-26dd84aef648", 00:20:18.219 "is_configured": true, 00:20:18.219 "data_offset": 0, 00:20:18.219 "data_size": 65536 00:20:18.219 } 00:20:18.219 ] 00:20:18.219 }' 00:20:18.219 05:30:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.219 05:30:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.219 05:30:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.219 05:30:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.219 05:30:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:19.155 [2024-11-20 05:30:50.773085] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:19.155 [2024-11-20 05:30:50.773174] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:19.155 [2024-11-20 05:30:50.773223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test 
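The `sleep 1` iterations traced around here implement a rebuild wait: poll the RPC once per second, extracting `.process.type // "none"` with jq, and break once the process field disappears or the bash SECONDS counter passes the timeout (283 in the trace). A sketch of that loop, where `"$rpc"` stands in for `scripts/rpc.py -s /var/tmp/spdk.sock` (the wiring and function name are assumptions for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the rebuild wait loop from the trace: poll bdev_raid_get_bdevs
# once per second until the bdev's .process field disappears (jq's
# // "none" fallback), or bail out when SECONDS exceeds the timeout.
wait_for_rebuild_sketch() {
    local rpc=$1 name=$2 timeout=${3:-300}
    local ptype
    while (( SECONDS < timeout )); do
        ptype=$("$rpc" bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$name\") | .process.type // \"none\"")
        [ "$ptype" = none ] && return 0
        sleep 1
    done
    return 1
}
```

Using SECONDS for the deadline matches the trace's `(( SECONDS < timeout ))` guard: the budget covers the whole test run rather than resetting per poll.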
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.155 "name": "raid_bdev1", 00:20:19.155 "uuid": "24145c50-4e26-48f2-a268-c2cafe05c4bd", 00:20:19.155 "strip_size_kb": 0, 00:20:19.155 "state": "online", 00:20:19.155 "raid_level": "raid1", 00:20:19.155 "superblock": false, 00:20:19.155 "num_base_bdevs": 2, 00:20:19.155 "num_base_bdevs_discovered": 2, 00:20:19.155 "num_base_bdevs_operational": 2, 00:20:19.155 "base_bdevs_list": [ 00:20:19.155 { 00:20:19.155 "name": "spare", 00:20:19.155 "uuid": "de7a35a9-564a-5f58-bd83-e13f7b110c07", 00:20:19.155 "is_configured": true, 00:20:19.155 "data_offset": 0, 00:20:19.155 "data_size": 65536 00:20:19.155 }, 00:20:19.155 { 00:20:19.155 "name": "BaseBdev2", 00:20:19.155 "uuid": "aec6291a-8f9f-5450-86ed-26dd84aef648", 00:20:19.155 "is_configured": true, 00:20:19.155 "data_offset": 0, 00:20:19.155 "data_size": 65536 00:20:19.155 } 00:20:19.155 ] 00:20:19.155 }' 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:19.155 05:30:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.155 05:30:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.415 05:30:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.415 "name": "raid_bdev1", 00:20:19.415 "uuid": "24145c50-4e26-48f2-a268-c2cafe05c4bd", 00:20:19.415 "strip_size_kb": 0, 00:20:19.415 "state": "online", 00:20:19.415 "raid_level": "raid1", 00:20:19.415 "superblock": false, 00:20:19.415 "num_base_bdevs": 2, 00:20:19.415 "num_base_bdevs_discovered": 2, 00:20:19.415 "num_base_bdevs_operational": 2, 00:20:19.415 "base_bdevs_list": [ 00:20:19.415 { 00:20:19.415 "name": "spare", 00:20:19.415 "uuid": "de7a35a9-564a-5f58-bd83-e13f7b110c07", 00:20:19.415 "is_configured": true, 
00:20:19.415 "data_offset": 0, 00:20:19.415 "data_size": 65536 00:20:19.415 }, 00:20:19.415 { 00:20:19.415 "name": "BaseBdev2", 00:20:19.415 "uuid": "aec6291a-8f9f-5450-86ed-26dd84aef648", 00:20:19.415 "is_configured": true, 00:20:19.415 "data_offset": 0, 00:20:19.415 "data_size": 65536 00:20:19.415 } 00:20:19.415 ] 00:20:19.415 }' 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.415 "name": "raid_bdev1", 00:20:19.415 "uuid": "24145c50-4e26-48f2-a268-c2cafe05c4bd", 00:20:19.415 "strip_size_kb": 0, 00:20:19.415 "state": "online", 00:20:19.415 "raid_level": "raid1", 00:20:19.415 "superblock": false, 00:20:19.415 "num_base_bdevs": 2, 00:20:19.415 "num_base_bdevs_discovered": 2, 00:20:19.415 "num_base_bdevs_operational": 2, 00:20:19.415 "base_bdevs_list": [ 00:20:19.415 { 00:20:19.415 "name": "spare", 00:20:19.415 "uuid": "de7a35a9-564a-5f58-bd83-e13f7b110c07", 00:20:19.415 "is_configured": true, 00:20:19.415 "data_offset": 0, 00:20:19.415 "data_size": 65536 00:20:19.415 }, 00:20:19.415 { 00:20:19.415 "name": "BaseBdev2", 00:20:19.415 "uuid": "aec6291a-8f9f-5450-86ed-26dd84aef648", 00:20:19.415 "is_configured": true, 00:20:19.415 "data_offset": 0, 00:20:19.415 "data_size": 65536 00:20:19.415 } 00:20:19.415 ] 00:20:19.415 }' 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.415 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.674 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:19.674 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.674 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.674 [2024-11-20 05:30:51.400832] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:19.674 [2024-11-20 05:30:51.400869] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:19.674 [2024-11-20 05:30:51.400955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.674 [2024-11-20 05:30:51.401023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:19.674 [2024-11-20 05:30:51.401032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:19.674 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.674 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.674 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.674 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.674 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:20:19.674 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.674 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:19.674 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:19.675 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:19.675 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:19.675 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:19.675 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:19.675 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:19.675 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:20:19.675 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:19.675 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:19.675 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:19.675 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:19.675 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:19.934 /dev/nbd0 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:19.934 1+0 records in 00:20:19.934 1+0 records out 00:20:19.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240118 s, 17.1 MB/s 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:19.934 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:20.193 /dev/nbd1 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:20.193 1+0 records in 00:20:20.193 1+0 records out 00:20:20.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034757 s, 11.8 MB/s 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:20.193 05:30:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:20.486 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 73330 00:20:20.763 05:30:52 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 73330 ']' 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 73330 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73330 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73330' 00:20:20.763 killing process with pid 73330 00:20:20.763 05:30:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 73330 00:20:20.763 Received shutdown signal, test time was about 60.000000 seconds 00:20:20.763 00:20:20.763 Latency(us) 00:20:20.763 [2024-11-20T05:30:52.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.764 [2024-11-20T05:30:52.599Z] =================================================================================================================== 00:20:20.764 [2024-11-20T05:30:52.599Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:20.764 [2024-11-20 05:30:52.499071] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:20.764 05:30:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 73330 00:20:21.022 [2024-11-20 05:30:52.651666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:20:21.588 00:20:21.588 real 0m13.630s 00:20:21.588 user 0m15.034s 00:20:21.588 sys 0m2.499s 00:20:21.588 05:30:53 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:21.588 ************************************ 00:20:21.588 END TEST raid_rebuild_test 00:20:21.588 ************************************ 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.588 05:30:53 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:20:21.588 05:30:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:21.588 05:30:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:21.588 05:30:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:21.588 ************************************ 00:20:21.588 START TEST raid_rebuild_test_sb 00:20:21.588 ************************************ 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=73726 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 73726 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 73726 ']' 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.588 05:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:21.588 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:21.588 Zero copy mechanism will not be used. 00:20:21.588 [2024-11-20 05:30:53.370801] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:20:21.588 [2024-11-20 05:30:53.370916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73726 ] 00:20:21.846 [2024-11-20 05:30:53.522795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.846 [2024-11-20 05:30:53.625592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.103 [2024-11-20 05:30:53.747001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:22.103 [2024-11-20 05:30:53.747049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for 
bdev in "${base_bdevs[@]}" 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.669 BaseBdev1_malloc 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.669 [2024-11-20 05:30:54.262288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:22.669 [2024-11-20 05:30:54.262354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.669 [2024-11-20 05:30:54.262383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:22.669 [2024-11-20 05:30:54.262393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.669 [2024-11-20 05:30:54.264255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.669 [2024-11-20 05:30:54.264288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:22.669 BaseBdev1 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:22.669 05:30:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.669 BaseBdev2_malloc 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.669 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.669 [2024-11-20 05:30:54.295570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:22.669 [2024-11-20 05:30:54.295617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.669 [2024-11-20 05:30:54.295632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:22.669 [2024-11-20 05:30:54.295641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.669 [2024-11-20 05:30:54.297398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.669 [2024-11-20 05:30:54.297427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:22.669 BaseBdev2 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.670 spare_malloc 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.670 spare_delay 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.670 [2024-11-20 05:30:54.357995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:22.670 [2024-11-20 05:30:54.358055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.670 [2024-11-20 05:30:54.358071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:22.670 [2024-11-20 05:30:54.358081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.670 [2024-11-20 05:30:54.359963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.670 [2024-11-20 05:30:54.359995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:22.670 spare 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.670 05:30:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.670 [2024-11-20 05:30:54.366049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:22.670 [2024-11-20 05:30:54.367641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:22.670 [2024-11-20 05:30:54.367785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:22.670 [2024-11-20 05:30:54.367805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:22.670 [2024-11-20 05:30:54.368021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:22.670 [2024-11-20 05:30:54.368153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:22.670 [2024-11-20 05:30:54.368165] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:22.670 [2024-11-20 05:30:54.368278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.670 "name": "raid_bdev1", 00:20:22.670 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:22.670 "strip_size_kb": 0, 00:20:22.670 "state": "online", 00:20:22.670 "raid_level": "raid1", 00:20:22.670 "superblock": true, 00:20:22.670 "num_base_bdevs": 2, 00:20:22.670 "num_base_bdevs_discovered": 2, 00:20:22.670 "num_base_bdevs_operational": 2, 00:20:22.670 "base_bdevs_list": [ 00:20:22.670 { 00:20:22.670 "name": "BaseBdev1", 00:20:22.670 "uuid": "5bab2b01-ab28-5f83-af93-e4b2afef18d1", 00:20:22.670 "is_configured": true, 00:20:22.670 "data_offset": 2048, 00:20:22.670 "data_size": 63488 00:20:22.670 }, 00:20:22.670 { 00:20:22.670 "name": "BaseBdev2", 00:20:22.670 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:22.670 "is_configured": true, 00:20:22.670 "data_offset": 2048, 00:20:22.670 "data_size": 63488 00:20:22.670 } 00:20:22.670 ] 00:20:22.670 }' 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.670 05:30:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:22.928 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:22.928 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.928 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.928 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:22.928 [2024-11-20 05:30:54.702416] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:22.928 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.928 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:20:22.928 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.928 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.928 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.928 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:22.928 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:23.187 [2024-11-20 05:30:54.954237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:23.187 /dev/nbd0 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.187 1+0 records in 00:20:23.187 1+0 records out 00:20:23.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307049 s, 13.3 MB/s 00:20:23.187 05:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.187 05:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:20:23.187 05:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.187 05:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:23.187 05:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:20:23.187 05:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:23.187 05:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:23.187 05:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:23.187 05:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:23.187 05:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:28.447 63488+0 records in 00:20:28.447 63488+0 records out 00:20:28.447 32505856 bytes (33 MB, 31 MiB) copied, 4.60729 s, 7.1 MB/s 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:28.447 05:30:59 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:28.447 [2024-11-20 05:30:59.828427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.447 [2024-11-20 05:30:59.832514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.447 "name": "raid_bdev1", 00:20:28.447 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:28.447 "strip_size_kb": 0, 00:20:28.447 "state": "online", 00:20:28.447 "raid_level": "raid1", 00:20:28.447 "superblock": true, 
00:20:28.447 "num_base_bdevs": 2, 00:20:28.447 "num_base_bdevs_discovered": 1, 00:20:28.447 "num_base_bdevs_operational": 1, 00:20:28.447 "base_bdevs_list": [ 00:20:28.447 { 00:20:28.447 "name": null, 00:20:28.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.447 "is_configured": false, 00:20:28.447 "data_offset": 0, 00:20:28.447 "data_size": 63488 00:20:28.447 }, 00:20:28.447 { 00:20:28.447 "name": "BaseBdev2", 00:20:28.447 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:28.447 "is_configured": true, 00:20:28.447 "data_offset": 2048, 00:20:28.447 "data_size": 63488 00:20:28.447 } 00:20:28.447 ] 00:20:28.447 }' 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.447 05:30:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.447 05:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:28.447 05:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.447 05:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.447 [2024-11-20 05:31:00.148604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:28.447 [2024-11-20 05:31:00.158840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:20:28.447 05:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.447 05:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:28.447 [2024-11-20 05:31:00.160633] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:29.381 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.381 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:20:29.381 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:29.381 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:29.381 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.381 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.381 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.381 05:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.381 05:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.381 05:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.381 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.381 "name": "raid_bdev1", 00:20:29.381 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:29.381 "strip_size_kb": 0, 00:20:29.381 "state": "online", 00:20:29.381 "raid_level": "raid1", 00:20:29.381 "superblock": true, 00:20:29.381 "num_base_bdevs": 2, 00:20:29.381 "num_base_bdevs_discovered": 2, 00:20:29.381 "num_base_bdevs_operational": 2, 00:20:29.381 "process": { 00:20:29.381 "type": "rebuild", 00:20:29.381 "target": "spare", 00:20:29.381 "progress": { 00:20:29.381 "blocks": 20480, 00:20:29.381 "percent": 32 00:20:29.381 } 00:20:29.381 }, 00:20:29.381 "base_bdevs_list": [ 00:20:29.381 { 00:20:29.381 "name": "spare", 00:20:29.381 "uuid": "847b7638-c657-5705-a5cd-19652bce8df3", 00:20:29.381 "is_configured": true, 00:20:29.381 "data_offset": 2048, 00:20:29.381 "data_size": 63488 00:20:29.381 }, 00:20:29.381 { 00:20:29.381 "name": "BaseBdev2", 00:20:29.381 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:29.381 "is_configured": true, 00:20:29.381 "data_offset": 2048, 00:20:29.381 "data_size": 63488 
00:20:29.381 } 00:20:29.381 ] 00:20:29.381 }' 00:20:29.381 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.639 [2024-11-20 05:31:01.262776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:29.639 [2024-11-20 05:31:01.267256] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:29.639 [2024-11-20 05:31:01.267318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.639 [2024-11-20 05:31:01.267332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:29.639 [2024-11-20 05:31:01.267340] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.639 "name": "raid_bdev1", 00:20:29.639 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:29.639 "strip_size_kb": 0, 00:20:29.639 "state": "online", 00:20:29.639 "raid_level": "raid1", 00:20:29.639 "superblock": true, 00:20:29.639 "num_base_bdevs": 2, 00:20:29.639 "num_base_bdevs_discovered": 1, 00:20:29.639 "num_base_bdevs_operational": 1, 00:20:29.639 "base_bdevs_list": [ 00:20:29.639 { 00:20:29.639 "name": null, 00:20:29.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.639 "is_configured": false, 00:20:29.639 "data_offset": 0, 00:20:29.639 "data_size": 63488 00:20:29.639 }, 00:20:29.639 { 00:20:29.639 "name": "BaseBdev2", 00:20:29.639 "uuid": 
"66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:29.639 "is_configured": true, 00:20:29.639 "data_offset": 2048, 00:20:29.639 "data_size": 63488 00:20:29.639 } 00:20:29.639 ] 00:20:29.639 }' 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.639 05:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.897 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:29.897 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:29.897 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:29.897 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:29.897 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.897 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.897 05:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.897 05:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.898 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.898 05:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.898 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.898 "name": "raid_bdev1", 00:20:29.898 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:29.898 "strip_size_kb": 0, 00:20:29.898 "state": "online", 00:20:29.898 "raid_level": "raid1", 00:20:29.898 "superblock": true, 00:20:29.898 "num_base_bdevs": 2, 00:20:29.898 "num_base_bdevs_discovered": 1, 00:20:29.898 "num_base_bdevs_operational": 1, 00:20:29.898 "base_bdevs_list": [ 00:20:29.898 { 
00:20:29.898 "name": null, 00:20:29.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.898 "is_configured": false, 00:20:29.898 "data_offset": 0, 00:20:29.898 "data_size": 63488 00:20:29.898 }, 00:20:29.898 { 00:20:29.898 "name": "BaseBdev2", 00:20:29.898 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:29.898 "is_configured": true, 00:20:29.898 "data_offset": 2048, 00:20:29.898 "data_size": 63488 00:20:29.898 } 00:20:29.898 ] 00:20:29.898 }' 00:20:29.898 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.898 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:29.898 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:29.898 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:29.898 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:29.898 05:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.898 05:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.898 [2024-11-20 05:31:01.687453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:29.898 [2024-11-20 05:31:01.696757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:20:29.898 05:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.898 05:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:29.898 [2024-11-20 05:31:01.698447] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.274 05:31:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:31.274 "name": "raid_bdev1", 00:20:31.274 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:31.274 "strip_size_kb": 0, 00:20:31.274 "state": "online", 00:20:31.274 "raid_level": "raid1", 00:20:31.274 "superblock": true, 00:20:31.274 "num_base_bdevs": 2, 00:20:31.274 "num_base_bdevs_discovered": 2, 00:20:31.274 "num_base_bdevs_operational": 2, 00:20:31.274 "process": { 00:20:31.274 "type": "rebuild", 00:20:31.274 "target": "spare", 00:20:31.274 "progress": { 00:20:31.274 "blocks": 20480, 00:20:31.274 "percent": 32 00:20:31.274 } 00:20:31.274 }, 00:20:31.274 "base_bdevs_list": [ 00:20:31.274 { 00:20:31.274 "name": "spare", 00:20:31.274 "uuid": "847b7638-c657-5705-a5cd-19652bce8df3", 00:20:31.274 "is_configured": true, 00:20:31.274 "data_offset": 2048, 00:20:31.274 "data_size": 63488 00:20:31.274 }, 00:20:31.274 { 00:20:31.274 "name": "BaseBdev2", 00:20:31.274 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:31.274 
"is_configured": true, 00:20:31.274 "data_offset": 2048, 00:20:31.274 "data_size": 63488 00:20:31.274 } 00:20:31.274 ] 00:20:31.274 }' 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:31.274 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=297 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.274 05:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.275 05:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.275 05:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.275 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:31.275 "name": "raid_bdev1", 00:20:31.275 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:31.275 "strip_size_kb": 0, 00:20:31.275 "state": "online", 00:20:31.275 "raid_level": "raid1", 00:20:31.275 "superblock": true, 00:20:31.275 "num_base_bdevs": 2, 00:20:31.275 "num_base_bdevs_discovered": 2, 00:20:31.275 "num_base_bdevs_operational": 2, 00:20:31.275 "process": { 00:20:31.275 "type": "rebuild", 00:20:31.275 "target": "spare", 00:20:31.275 "progress": { 00:20:31.275 "blocks": 20480, 00:20:31.275 "percent": 32 00:20:31.275 } 00:20:31.275 }, 00:20:31.275 "base_bdevs_list": [ 00:20:31.275 { 00:20:31.275 "name": "spare", 00:20:31.275 "uuid": "847b7638-c657-5705-a5cd-19652bce8df3", 00:20:31.275 "is_configured": true, 00:20:31.275 "data_offset": 2048, 00:20:31.275 "data_size": 63488 00:20:31.275 }, 00:20:31.275 { 00:20:31.275 "name": "BaseBdev2", 00:20:31.275 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:31.275 "is_configured": true, 00:20:31.275 "data_offset": 2048, 00:20:31.275 "data_size": 63488 00:20:31.275 } 00:20:31.275 ] 00:20:31.275 }' 00:20:31.275 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.275 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:31.275 05:31:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.275 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.275 05:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:32.212 05:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:32.212 05:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.212 05:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.212 05:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:32.212 05:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:32.212 05:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.212 05:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.212 05:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.212 05:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.212 05:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.212 05:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.212 05:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.212 "name": "raid_bdev1", 00:20:32.212 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:32.212 "strip_size_kb": 0, 00:20:32.212 "state": "online", 00:20:32.212 "raid_level": "raid1", 00:20:32.212 "superblock": true, 00:20:32.212 "num_base_bdevs": 2, 00:20:32.212 "num_base_bdevs_discovered": 2, 00:20:32.212 "num_base_bdevs_operational": 2, 00:20:32.212 "process": { 
00:20:32.212 "type": "rebuild", 00:20:32.212 "target": "spare", 00:20:32.212 "progress": { 00:20:32.212 "blocks": 43008, 00:20:32.212 "percent": 67 00:20:32.212 } 00:20:32.212 }, 00:20:32.212 "base_bdevs_list": [ 00:20:32.212 { 00:20:32.212 "name": "spare", 00:20:32.212 "uuid": "847b7638-c657-5705-a5cd-19652bce8df3", 00:20:32.212 "is_configured": true, 00:20:32.212 "data_offset": 2048, 00:20:32.212 "data_size": 63488 00:20:32.212 }, 00:20:32.213 { 00:20:32.213 "name": "BaseBdev2", 00:20:32.213 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:32.213 "is_configured": true, 00:20:32.213 "data_offset": 2048, 00:20:32.213 "data_size": 63488 00:20:32.213 } 00:20:32.213 ] 00:20:32.213 }' 00:20:32.213 05:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.213 05:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:32.213 05:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.213 05:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.213 05:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:33.145 [2024-11-20 05:31:04.816180] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:33.145 [2024-11-20 05:31:04.816269] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:33.145 [2024-11-20 05:31:04.816400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.402 05:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:33.402 05:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:33.402 05:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:33.402 
05:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:33.402 05:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:33.402 05:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:33.402 05:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.402 05:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.402 05:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.402 05:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.402 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.402 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:33.402 "name": "raid_bdev1", 00:20:33.402 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:33.402 "strip_size_kb": 0, 00:20:33.402 "state": "online", 00:20:33.402 "raid_level": "raid1", 00:20:33.402 "superblock": true, 00:20:33.402 "num_base_bdevs": 2, 00:20:33.402 "num_base_bdevs_discovered": 2, 00:20:33.402 "num_base_bdevs_operational": 2, 00:20:33.402 "base_bdevs_list": [ 00:20:33.402 { 00:20:33.402 "name": "spare", 00:20:33.402 "uuid": "847b7638-c657-5705-a5cd-19652bce8df3", 00:20:33.402 "is_configured": true, 00:20:33.402 "data_offset": 2048, 00:20:33.402 "data_size": 63488 00:20:33.402 }, 00:20:33.402 { 00:20:33.402 "name": "BaseBdev2", 00:20:33.402 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:33.402 "is_configured": true, 00:20:33.402 "data_offset": 2048, 00:20:33.402 "data_size": 63488 00:20:33.402 } 00:20:33.402 ] 00:20:33.402 }' 00:20:33.402 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:33.403 "name": "raid_bdev1", 00:20:33.403 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:33.403 "strip_size_kb": 0, 00:20:33.403 "state": "online", 00:20:33.403 "raid_level": "raid1", 00:20:33.403 "superblock": true, 00:20:33.403 "num_base_bdevs": 2, 00:20:33.403 "num_base_bdevs_discovered": 2, 00:20:33.403 "num_base_bdevs_operational": 2, 00:20:33.403 "base_bdevs_list": [ 00:20:33.403 { 00:20:33.403 
"name": "spare", 00:20:33.403 "uuid": "847b7638-c657-5705-a5cd-19652bce8df3", 00:20:33.403 "is_configured": true, 00:20:33.403 "data_offset": 2048, 00:20:33.403 "data_size": 63488 00:20:33.403 }, 00:20:33.403 { 00:20:33.403 "name": "BaseBdev2", 00:20:33.403 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:33.403 "is_configured": true, 00:20:33.403 "data_offset": 2048, 00:20:33.403 "data_size": 63488 00:20:33.403 } 00:20:33.403 ] 00:20:33.403 }' 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.403 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.673 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.673 "name": "raid_bdev1", 00:20:33.673 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:33.673 "strip_size_kb": 0, 00:20:33.673 "state": "online", 00:20:33.673 "raid_level": "raid1", 00:20:33.673 "superblock": true, 00:20:33.673 "num_base_bdevs": 2, 00:20:33.673 "num_base_bdevs_discovered": 2, 00:20:33.673 "num_base_bdevs_operational": 2, 00:20:33.673 "base_bdevs_list": [ 00:20:33.673 { 00:20:33.673 "name": "spare", 00:20:33.673 "uuid": "847b7638-c657-5705-a5cd-19652bce8df3", 00:20:33.673 "is_configured": true, 00:20:33.673 "data_offset": 2048, 00:20:33.674 "data_size": 63488 00:20:33.674 }, 00:20:33.674 { 00:20:33.674 "name": "BaseBdev2", 00:20:33.674 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:33.674 "is_configured": true, 00:20:33.674 "data_offset": 2048, 00:20:33.674 "data_size": 63488 00:20:33.674 } 00:20:33.674 ] 00:20:33.674 }' 00:20:33.674 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.674 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:33.932 [2024-11-20 05:31:05.511869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:33.932 [2024-11-20 05:31:05.511903] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:33.932 [2024-11-20 05:31:05.511979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.932 [2024-11-20 05:31:05.512044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:33.932 [2024-11-20 05:31:05.512054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:33.932 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:34.189 /dev/nbd0 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:34.190 1+0 records in 00:20:34.190 1+0 records out 00:20:34.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00190776 s, 2.1 MB/s 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:34.190 05:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:34.469 /dev/nbd1 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:34.469 05:31:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:34.469 1+0 records in 00:20:34.469 1+0 records out 00:20:34.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003169 s, 12.9 MB/s 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:34.469 
05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:34.469 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:34.749 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:34.749 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:34.749 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:34.749 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:34.749 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:34.749 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:34.749 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:34.749 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:34.749 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:34.749 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.008 [2024-11-20 05:31:06.613159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:35.008 [2024-11-20 05:31:06.613219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.008 [2024-11-20 05:31:06.613241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:35.008 [2024-11-20 05:31:06.613250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.008 [2024-11-20 05:31:06.615287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.008 [2024-11-20 05:31:06.615320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:35.008 [2024-11-20 05:31:06.615421] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:35.008 [2024-11-20 
05:31:06.615467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:35.008 [2024-11-20 05:31:06.615588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:35.008 spare 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.008 [2024-11-20 05:31:06.715687] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:35.008 [2024-11-20 05:31:06.715741] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:35.008 [2024-11-20 05:31:06.716068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:20:35.008 [2024-11-20 05:31:06.716244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:35.008 [2024-11-20 05:31:06.716261] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:35.008 [2024-11-20 05:31:06.716438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.008 "name": "raid_bdev1", 00:20:35.008 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:35.008 "strip_size_kb": 0, 00:20:35.008 "state": "online", 00:20:35.008 "raid_level": "raid1", 00:20:35.008 "superblock": true, 00:20:35.008 "num_base_bdevs": 2, 00:20:35.008 "num_base_bdevs_discovered": 2, 00:20:35.008 "num_base_bdevs_operational": 2, 00:20:35.008 "base_bdevs_list": [ 00:20:35.008 { 00:20:35.008 "name": "spare", 00:20:35.008 "uuid": "847b7638-c657-5705-a5cd-19652bce8df3", 00:20:35.008 "is_configured": true, 00:20:35.008 "data_offset": 2048, 00:20:35.008 "data_size": 63488 00:20:35.008 }, 00:20:35.008 { 00:20:35.008 "name": "BaseBdev2", 00:20:35.008 "uuid": 
"66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:35.008 "is_configured": true, 00:20:35.008 "data_offset": 2048, 00:20:35.008 "data_size": 63488 00:20:35.008 } 00:20:35.008 ] 00:20:35.008 }' 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.008 05:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.271 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:35.271 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:35.271 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:35.271 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:35.271 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:35.271 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.271 05:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.271 05:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.271 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.271 05:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:35.530 "name": "raid_bdev1", 00:20:35.530 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:35.530 "strip_size_kb": 0, 00:20:35.530 "state": "online", 00:20:35.530 "raid_level": "raid1", 00:20:35.530 "superblock": true, 00:20:35.530 "num_base_bdevs": 2, 00:20:35.530 "num_base_bdevs_discovered": 2, 00:20:35.530 "num_base_bdevs_operational": 2, 00:20:35.530 "base_bdevs_list": [ 00:20:35.530 { 
00:20:35.530 "name": "spare", 00:20:35.530 "uuid": "847b7638-c657-5705-a5cd-19652bce8df3", 00:20:35.530 "is_configured": true, 00:20:35.530 "data_offset": 2048, 00:20:35.530 "data_size": 63488 00:20:35.530 }, 00:20:35.530 { 00:20:35.530 "name": "BaseBdev2", 00:20:35.530 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:35.530 "is_configured": true, 00:20:35.530 "data_offset": 2048, 00:20:35.530 "data_size": 63488 00:20:35.530 } 00:20:35.530 ] 00:20:35.530 }' 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.530 [2024-11-20 05:31:07.229359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.530 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:35.531 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.531 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.531 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.531 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.531 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.531 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.531 05:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.531 05:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.531 05:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.531 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.531 "name": "raid_bdev1", 00:20:35.531 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:35.531 "strip_size_kb": 0, 00:20:35.531 
"state": "online", 00:20:35.531 "raid_level": "raid1", 00:20:35.531 "superblock": true, 00:20:35.531 "num_base_bdevs": 2, 00:20:35.531 "num_base_bdevs_discovered": 1, 00:20:35.531 "num_base_bdevs_operational": 1, 00:20:35.531 "base_bdevs_list": [ 00:20:35.531 { 00:20:35.531 "name": null, 00:20:35.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.531 "is_configured": false, 00:20:35.531 "data_offset": 0, 00:20:35.531 "data_size": 63488 00:20:35.531 }, 00:20:35.531 { 00:20:35.531 "name": "BaseBdev2", 00:20:35.531 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:35.531 "is_configured": true, 00:20:35.531 "data_offset": 2048, 00:20:35.531 "data_size": 63488 00:20:35.531 } 00:20:35.531 ] 00:20:35.531 }' 00:20:35.531 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.531 05:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.788 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:35.788 05:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.788 05:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.788 [2024-11-20 05:31:07.581429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:35.788 [2024-11-20 05:31:07.581615] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:35.788 [2024-11-20 05:31:07.581634] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:35.788 [2024-11-20 05:31:07.581669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:35.788 [2024-11-20 05:31:07.591328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:20:35.788 05:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.788 05:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:35.788 [2024-11-20 05:31:07.593038] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:37.163 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.163 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:37.163 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:37.163 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:37.163 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:37.163 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.163 05:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.163 05:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.163 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.163 05:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.163 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:37.163 "name": "raid_bdev1", 00:20:37.163 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:37.163 "strip_size_kb": 0, 00:20:37.163 "state": "online", 00:20:37.163 "raid_level": "raid1", 
00:20:37.163 "superblock": true, 00:20:37.163 "num_base_bdevs": 2, 00:20:37.163 "num_base_bdevs_discovered": 2, 00:20:37.163 "num_base_bdevs_operational": 2, 00:20:37.163 "process": { 00:20:37.163 "type": "rebuild", 00:20:37.163 "target": "spare", 00:20:37.163 "progress": { 00:20:37.163 "blocks": 20480, 00:20:37.163 "percent": 32 00:20:37.163 } 00:20:37.163 }, 00:20:37.163 "base_bdevs_list": [ 00:20:37.163 { 00:20:37.163 "name": "spare", 00:20:37.163 "uuid": "847b7638-c657-5705-a5cd-19652bce8df3", 00:20:37.163 "is_configured": true, 00:20:37.163 "data_offset": 2048, 00:20:37.163 "data_size": 63488 00:20:37.163 }, 00:20:37.163 { 00:20:37.163 "name": "BaseBdev2", 00:20:37.163 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:37.163 "is_configured": true, 00:20:37.163 "data_offset": 2048, 00:20:37.163 "data_size": 63488 00:20:37.163 } 00:20:37.163 ] 00:20:37.163 }' 00:20:37.163 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.164 [2024-11-20 05:31:08.695564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:37.164 [2024-11-20 05:31:08.699473] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:37.164 [2024-11-20 05:31:08.699531] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:20:37.164 [2024-11-20 05:31:08.699544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:37.164 [2024-11-20 05:31:08.699552] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.164 "name": "raid_bdev1", 00:20:37.164 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:37.164 "strip_size_kb": 0, 00:20:37.164 "state": "online", 00:20:37.164 "raid_level": "raid1", 00:20:37.164 "superblock": true, 00:20:37.164 "num_base_bdevs": 2, 00:20:37.164 "num_base_bdevs_discovered": 1, 00:20:37.164 "num_base_bdevs_operational": 1, 00:20:37.164 "base_bdevs_list": [ 00:20:37.164 { 00:20:37.164 "name": null, 00:20:37.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.164 "is_configured": false, 00:20:37.164 "data_offset": 0, 00:20:37.164 "data_size": 63488 00:20:37.164 }, 00:20:37.164 { 00:20:37.164 "name": "BaseBdev2", 00:20:37.164 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:37.164 "is_configured": true, 00:20:37.164 "data_offset": 2048, 00:20:37.164 "data_size": 63488 00:20:37.164 } 00:20:37.164 ] 00:20:37.164 }' 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.164 05:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.421 05:31:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:37.421 05:31:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.421 05:31:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.421 [2024-11-20 05:31:09.047011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:37.421 [2024-11-20 05:31:09.047082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.421 [2024-11-20 05:31:09.047105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:37.421 [2024-11-20 05:31:09.047117] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.421 [2024-11-20 05:31:09.047561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.421 [2024-11-20 05:31:09.047588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:37.421 [2024-11-20 05:31:09.047674] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:37.421 [2024-11-20 05:31:09.047695] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:37.421 [2024-11-20 05:31:09.047704] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:37.421 [2024-11-20 05:31:09.047730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:37.421 [2024-11-20 05:31:09.057162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:20:37.421 spare 00:20:37.421 05:31:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.421 05:31:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:37.421 [2024-11-20 05:31:09.058851] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:38.353 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:38.354 "name": "raid_bdev1", 00:20:38.354 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:38.354 "strip_size_kb": 0, 00:20:38.354 "state": "online", 00:20:38.354 "raid_level": "raid1", 00:20:38.354 "superblock": true, 00:20:38.354 "num_base_bdevs": 2, 00:20:38.354 "num_base_bdevs_discovered": 2, 00:20:38.354 "num_base_bdevs_operational": 2, 00:20:38.354 "process": { 00:20:38.354 "type": "rebuild", 00:20:38.354 "target": "spare", 00:20:38.354 "progress": { 00:20:38.354 "blocks": 20480, 00:20:38.354 "percent": 32 00:20:38.354 } 00:20:38.354 }, 00:20:38.354 "base_bdevs_list": [ 00:20:38.354 { 00:20:38.354 "name": "spare", 00:20:38.354 "uuid": "847b7638-c657-5705-a5cd-19652bce8df3", 00:20:38.354 "is_configured": true, 00:20:38.354 "data_offset": 2048, 00:20:38.354 "data_size": 63488 00:20:38.354 }, 00:20:38.354 { 00:20:38.354 "name": "BaseBdev2", 00:20:38.354 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:38.354 "is_configured": true, 00:20:38.354 "data_offset": 2048, 00:20:38.354 "data_size": 63488 00:20:38.354 } 00:20:38.354 ] 00:20:38.354 }' 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:38.354 
05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.354 [2024-11-20 05:31:10.161325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:38.354 [2024-11-20 05:31:10.165247] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:38.354 [2024-11-20 05:31:10.165300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.354 [2024-11-20 05:31:10.165314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:38.354 [2024-11-20 05:31:10.165321] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:38.354 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:38.612 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:38.612 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:38.612 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.612 05:31:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.612 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.612 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.612 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.612 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.612 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.612 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.612 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.612 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.612 "name": "raid_bdev1", 00:20:38.612 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:38.612 "strip_size_kb": 0, 00:20:38.612 "state": "online", 00:20:38.612 "raid_level": "raid1", 00:20:38.612 "superblock": true, 00:20:38.612 "num_base_bdevs": 2, 00:20:38.612 "num_base_bdevs_discovered": 1, 00:20:38.612 "num_base_bdevs_operational": 1, 00:20:38.612 "base_bdevs_list": [ 00:20:38.612 { 00:20:38.612 "name": null, 00:20:38.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.612 "is_configured": false, 00:20:38.612 "data_offset": 0, 00:20:38.612 "data_size": 63488 00:20:38.612 }, 00:20:38.612 { 00:20:38.612 "name": "BaseBdev2", 00:20:38.612 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:38.612 "is_configured": true, 00:20:38.612 "data_offset": 2048, 00:20:38.612 "data_size": 63488 00:20:38.612 } 00:20:38.612 ] 00:20:38.612 }' 00:20:38.612 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.612 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.871 05:31:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:38.871 "name": "raid_bdev1", 00:20:38.871 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:38.871 "strip_size_kb": 0, 00:20:38.871 "state": "online", 00:20:38.871 "raid_level": "raid1", 00:20:38.871 "superblock": true, 00:20:38.871 "num_base_bdevs": 2, 00:20:38.871 "num_base_bdevs_discovered": 1, 00:20:38.871 "num_base_bdevs_operational": 1, 00:20:38.871 "base_bdevs_list": [ 00:20:38.871 { 00:20:38.871 "name": null, 00:20:38.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.871 "is_configured": false, 00:20:38.871 "data_offset": 0, 00:20:38.871 "data_size": 63488 00:20:38.871 }, 00:20:38.871 { 00:20:38.871 "name": "BaseBdev2", 00:20:38.871 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:38.871 "is_configured": true, 00:20:38.871 "data_offset": 2048, 00:20:38.871 "data_size": 
63488 00:20:38.871 } 00:20:38.871 ] 00:20:38.871 }' 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.871 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.871 [2024-11-20 05:31:10.616491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:38.871 [2024-11-20 05:31:10.616547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.871 [2024-11-20 05:31:10.616568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:38.871 [2024-11-20 05:31:10.616576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.871 [2024-11-20 05:31:10.616985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.871 [2024-11-20 05:31:10.617008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:20:38.871 [2024-11-20 05:31:10.617085] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:38.872 [2024-11-20 05:31:10.617098] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:38.872 [2024-11-20 05:31:10.617107] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:38.872 [2024-11-20 05:31:10.617116] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:38.872 BaseBdev1 00:20:38.872 05:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.872 05:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:39.814 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:39.814 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:39.814 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:39.814 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:39.814 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:39.814 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:39.814 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.814 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.814 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.814 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.814 05:31:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.814 05:31:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.814 05:31:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.814 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.814 05:31:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.073 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.073 "name": "raid_bdev1", 00:20:40.073 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:40.073 "strip_size_kb": 0, 00:20:40.073 "state": "online", 00:20:40.073 "raid_level": "raid1", 00:20:40.073 "superblock": true, 00:20:40.073 "num_base_bdevs": 2, 00:20:40.073 "num_base_bdevs_discovered": 1, 00:20:40.073 "num_base_bdevs_operational": 1, 00:20:40.073 "base_bdevs_list": [ 00:20:40.073 { 00:20:40.073 "name": null, 00:20:40.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.073 "is_configured": false, 00:20:40.073 "data_offset": 0, 00:20:40.073 "data_size": 63488 00:20:40.073 }, 00:20:40.073 { 00:20:40.073 "name": "BaseBdev2", 00:20:40.073 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:40.073 "is_configured": true, 00:20:40.073 "data_offset": 2048, 00:20:40.073 "data_size": 63488 00:20:40.073 } 00:20:40.073 ] 00:20:40.073 }' 00:20:40.073 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.073 05:31:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.331 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:40.331 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:40.331 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:20:40.331 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:40.331 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:40.331 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.331 05:31:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.331 05:31:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.331 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.331 05:31:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.331 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:40.331 "name": "raid_bdev1", 00:20:40.331 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:40.331 "strip_size_kb": 0, 00:20:40.331 "state": "online", 00:20:40.331 "raid_level": "raid1", 00:20:40.331 "superblock": true, 00:20:40.331 "num_base_bdevs": 2, 00:20:40.331 "num_base_bdevs_discovered": 1, 00:20:40.331 "num_base_bdevs_operational": 1, 00:20:40.331 "base_bdevs_list": [ 00:20:40.331 { 00:20:40.331 "name": null, 00:20:40.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.331 "is_configured": false, 00:20:40.331 "data_offset": 0, 00:20:40.331 "data_size": 63488 00:20:40.331 }, 00:20:40.331 { 00:20:40.331 "name": "BaseBdev2", 00:20:40.331 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:40.331 "is_configured": true, 00:20:40.331 "data_offset": 2048, 00:20:40.331 "data_size": 63488 00:20:40.331 } 00:20:40.331 ] 00:20:40.331 }' 00:20:40.331 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:40.331 05:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:40.331 05:31:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.331 [2024-11-20 05:31:12.028795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:40.331 [2024-11-20 05:31:12.028946] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:40.331 [2024-11-20 05:31:12.028967] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:40.331 request: 00:20:40.331 { 00:20:40.331 "base_bdev": "BaseBdev1", 00:20:40.331 "raid_bdev": "raid_bdev1", 00:20:40.331 "method": 
"bdev_raid_add_base_bdev", 00:20:40.331 "req_id": 1 00:20:40.331 } 00:20:40.331 Got JSON-RPC error response 00:20:40.331 response: 00:20:40.331 { 00:20:40.331 "code": -22, 00:20:40.331 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:40.331 } 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:40.331 05:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.263 05:31:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.263 "name": "raid_bdev1", 00:20:41.263 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:41.263 "strip_size_kb": 0, 00:20:41.263 "state": "online", 00:20:41.263 "raid_level": "raid1", 00:20:41.263 "superblock": true, 00:20:41.263 "num_base_bdevs": 2, 00:20:41.263 "num_base_bdevs_discovered": 1, 00:20:41.263 "num_base_bdevs_operational": 1, 00:20:41.263 "base_bdevs_list": [ 00:20:41.263 { 00:20:41.263 "name": null, 00:20:41.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.263 "is_configured": false, 00:20:41.263 "data_offset": 0, 00:20:41.263 "data_size": 63488 00:20:41.263 }, 00:20:41.263 { 00:20:41.263 "name": "BaseBdev2", 00:20:41.263 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:41.263 "is_configured": true, 00:20:41.263 "data_offset": 2048, 00:20:41.263 "data_size": 63488 00:20:41.263 } 00:20:41.263 ] 00:20:41.263 }' 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.263 05:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.828 "name": "raid_bdev1", 00:20:41.828 "uuid": "6f7e917d-3fd1-42f8-867e-c5aa54195803", 00:20:41.828 "strip_size_kb": 0, 00:20:41.828 "state": "online", 00:20:41.828 "raid_level": "raid1", 00:20:41.828 "superblock": true, 00:20:41.828 "num_base_bdevs": 2, 00:20:41.828 "num_base_bdevs_discovered": 1, 00:20:41.828 "num_base_bdevs_operational": 1, 00:20:41.828 "base_bdevs_list": [ 00:20:41.828 { 00:20:41.828 "name": null, 00:20:41.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.828 "is_configured": false, 00:20:41.828 "data_offset": 0, 00:20:41.828 "data_size": 63488 00:20:41.828 }, 00:20:41.828 { 00:20:41.828 "name": "BaseBdev2", 00:20:41.828 "uuid": "66062776-9ecd-5cd8-b786-2c8e22082619", 00:20:41.828 "is_configured": true, 00:20:41.828 "data_offset": 2048, 00:20:41.828 "data_size": 63488 00:20:41.828 } 00:20:41.828 ] 00:20:41.828 }' 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 73726 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 73726 ']' 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 73726 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73726 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:41.828 killing process with pid 73726 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73726' 00:20:41.828 05:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 73726 00:20:41.828 Received shutdown signal, test time was about 60.000000 seconds 00:20:41.828 00:20:41.828 Latency(us) 00:20:41.828 [2024-11-20T05:31:13.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.828 [2024-11-20T05:31:13.663Z] =================================================================================================================== 00:20:41.828 [2024-11-20T05:31:13.663Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:41.828 [2024-11-20 05:31:13.503882] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:41.828 05:31:13 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 73726 00:20:41.829 [2024-11-20 05:31:13.504003] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:41.829 [2024-11-20 05:31:13.504060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:41.829 [2024-11-20 05:31:13.504070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:42.087 [2024-11-20 05:31:13.661326] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:42.653 00:20:42.653 real 0m20.955s 00:20:42.653 user 0m24.495s 00:20:42.653 sys 0m3.478s 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.653 ************************************ 00:20:42.653 END TEST raid_rebuild_test_sb 00:20:42.653 ************************************ 00:20:42.653 05:31:14 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:20:42.653 05:31:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:42.653 05:31:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:42.653 05:31:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:42.653 ************************************ 00:20:42.653 START TEST raid_rebuild_test_io 00:20:42.653 ************************************ 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:42.653 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:42.654 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:42.654 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:42.654 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:42.654 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:42.654 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:42.654 
05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:42.654 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:42.654 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=74439 00:20:42.654 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 74439 00:20:42.654 05:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 74439 ']' 00:20:42.654 05:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.654 05:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:42.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.654 05:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.654 05:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:42.654 05:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:42.654 05:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:42.654 [2024-11-20 05:31:14.373109] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:20:42.654 [2024-11-20 05:31:14.373243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:20:42.654 Zero copy mechanism will not be used. 
00:20:42.654 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74439 ] 00:20:42.913 [2024-11-20 05:31:14.527861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.913 [2024-11-20 05:31:14.625257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.171 [2024-11-20 05:31:14.745624] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:43.171 [2024-11-20 05:31:14.745669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.430 BaseBdev1_malloc 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.430 [2024-11-20 05:31:15.206272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:43.430 [2024-11-20 05:31:15.206337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:20:43.430 [2024-11-20 05:31:15.206357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:43.430 [2024-11-20 05:31:15.206381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.430 [2024-11-20 05:31:15.208200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.430 [2024-11-20 05:31:15.208235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:43.430 BaseBdev1 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.430 BaseBdev2_malloc 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.430 [2024-11-20 05:31:15.239766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:43.430 [2024-11-20 05:31:15.239823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.430 [2024-11-20 05:31:15.239839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:43.430 [2024-11-20 05:31:15.239849] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.430 [2024-11-20 05:31:15.241715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.430 [2024-11-20 05:31:15.241747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:43.430 BaseBdev2 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.430 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.689 spare_malloc 00:20:43.689 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.689 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:43.689 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.689 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.689 spare_delay 00:20:43.689 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.689 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:43.689 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.689 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.689 [2024-11-20 05:31:15.299343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:43.689 [2024-11-20 05:31:15.299425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:20:43.690 [2024-11-20 05:31:15.299442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:43.690 [2024-11-20 05:31:15.299453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.690 [2024-11-20 05:31:15.301354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.690 [2024-11-20 05:31:15.301397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:43.690 spare 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.690 [2024-11-20 05:31:15.307403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:43.690 [2024-11-20 05:31:15.309039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:43.690 [2024-11-20 05:31:15.309121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:43.690 [2024-11-20 05:31:15.309133] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:43.690 [2024-11-20 05:31:15.309381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:43.690 [2024-11-20 05:31:15.309518] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:43.690 [2024-11-20 05:31:15.309532] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:43.690 [2024-11-20 05:31:15.309662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.690 "name": "raid_bdev1", 00:20:43.690 "uuid": "d0cec37b-1d41-4aed-af37-ff8aeece41bf", 00:20:43.690 
"strip_size_kb": 0, 00:20:43.690 "state": "online", 00:20:43.690 "raid_level": "raid1", 00:20:43.690 "superblock": false, 00:20:43.690 "num_base_bdevs": 2, 00:20:43.690 "num_base_bdevs_discovered": 2, 00:20:43.690 "num_base_bdevs_operational": 2, 00:20:43.690 "base_bdevs_list": [ 00:20:43.690 { 00:20:43.690 "name": "BaseBdev1", 00:20:43.690 "uuid": "2009ce30-6a02-5ebe-a1d0-d33a9098e3a9", 00:20:43.690 "is_configured": true, 00:20:43.690 "data_offset": 0, 00:20:43.690 "data_size": 65536 00:20:43.690 }, 00:20:43.690 { 00:20:43.690 "name": "BaseBdev2", 00:20:43.690 "uuid": "ee7cd947-b2e5-5e7e-a679-e59601dcb255", 00:20:43.690 "is_configured": true, 00:20:43.690 "data_offset": 0, 00:20:43.690 "data_size": 65536 00:20:43.690 } 00:20:43.690 ] 00:20:43.690 }' 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.690 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:43.948 [2024-11-20 05:31:15.639728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:43.948 05:31:15 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.948 [2024-11-20 05:31:15.699434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.948 05:31:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.948 "name": "raid_bdev1", 00:20:43.948 "uuid": "d0cec37b-1d41-4aed-af37-ff8aeece41bf", 00:20:43.948 "strip_size_kb": 0, 00:20:43.948 "state": "online", 00:20:43.948 "raid_level": "raid1", 00:20:43.948 "superblock": false, 00:20:43.948 "num_base_bdevs": 2, 00:20:43.948 "num_base_bdevs_discovered": 1, 00:20:43.948 "num_base_bdevs_operational": 1, 00:20:43.948 "base_bdevs_list": [ 00:20:43.948 { 00:20:43.948 "name": null, 00:20:43.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.948 "is_configured": false, 00:20:43.948 "data_offset": 0, 00:20:43.948 "data_size": 65536 00:20:43.948 }, 00:20:43.948 { 00:20:43.948 "name": "BaseBdev2", 00:20:43.948 "uuid": "ee7cd947-b2e5-5e7e-a679-e59601dcb255", 00:20:43.948 "is_configured": true, 00:20:43.948 "data_offset": 0, 00:20:43.948 "data_size": 65536 00:20:43.948 } 00:20:43.948 ] 00:20:43.948 }' 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.948 05:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:20:44.207 [2024-11-20 05:31:15.788468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:44.207 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:44.207 Zero copy mechanism will not be used. 00:20:44.207 Running I/O for 60 seconds... 00:20:44.207 05:31:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:44.207 05:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.207 05:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:44.207 [2024-11-20 05:31:16.035016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:44.465 05:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.465 05:31:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:44.465 [2024-11-20 05:31:16.075019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:44.465 [2024-11-20 05:31:16.076728] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:44.465 [2024-11-20 05:31:16.183205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:44.465 [2024-11-20 05:31:16.183772] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:44.723 [2024-11-20 05:31:16.302146] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:44.723 [2024-11-20 05:31:16.302458] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:44.981 [2024-11-20 05:31:16.645849] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 
offset_end: 12288 00:20:45.239 157.00 IOPS, 471.00 MiB/s [2024-11-20T05:31:17.074Z] [2024-11-20 05:31:16.871020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:45.239 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:45.239 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:45.239 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:45.496 "name": "raid_bdev1", 00:20:45.496 "uuid": "d0cec37b-1d41-4aed-af37-ff8aeece41bf", 00:20:45.496 "strip_size_kb": 0, 00:20:45.496 "state": "online", 00:20:45.496 "raid_level": "raid1", 00:20:45.496 "superblock": false, 00:20:45.496 "num_base_bdevs": 2, 00:20:45.496 "num_base_bdevs_discovered": 2, 00:20:45.496 "num_base_bdevs_operational": 2, 00:20:45.496 "process": { 00:20:45.496 "type": "rebuild", 00:20:45.496 "target": "spare", 00:20:45.496 "progress": { 00:20:45.496 "blocks": 12288, 00:20:45.496 "percent": 18 00:20:45.496 } 
00:20:45.496 }, 00:20:45.496 "base_bdevs_list": [ 00:20:45.496 { 00:20:45.496 "name": "spare", 00:20:45.496 "uuid": "40eddcf9-19c7-52d2-a29c-a66915a3d49b", 00:20:45.496 "is_configured": true, 00:20:45.496 "data_offset": 0, 00:20:45.496 "data_size": 65536 00:20:45.496 }, 00:20:45.496 { 00:20:45.496 "name": "BaseBdev2", 00:20:45.496 "uuid": "ee7cd947-b2e5-5e7e-a679-e59601dcb255", 00:20:45.496 "is_configured": true, 00:20:45.496 "data_offset": 0, 00:20:45.496 "data_size": 65536 00:20:45.496 } 00:20:45.496 ] 00:20:45.496 }' 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:45.496 [2024-11-20 05:31:17.177336] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:45.496 [2024-11-20 05:31:17.184138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:45.496 [2024-11-20 05:31:17.195057] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:45.496 [2024-11-20 05:31:17.197212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.496 [2024-11-20 05:31:17.197242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:45.496 [2024-11-20 05:31:17.197251] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:45.496 [2024-11-20 05:31:17.228870] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.496 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.497 05:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.497 05:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:45.497 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.497 05:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:20:45.497 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.497 "name": "raid_bdev1", 00:20:45.497 "uuid": "d0cec37b-1d41-4aed-af37-ff8aeece41bf", 00:20:45.497 "strip_size_kb": 0, 00:20:45.497 "state": "online", 00:20:45.497 "raid_level": "raid1", 00:20:45.497 "superblock": false, 00:20:45.497 "num_base_bdevs": 2, 00:20:45.497 "num_base_bdevs_discovered": 1, 00:20:45.497 "num_base_bdevs_operational": 1, 00:20:45.497 "base_bdevs_list": [ 00:20:45.497 { 00:20:45.497 "name": null, 00:20:45.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.497 "is_configured": false, 00:20:45.497 "data_offset": 0, 00:20:45.497 "data_size": 65536 00:20:45.497 }, 00:20:45.497 { 00:20:45.497 "name": "BaseBdev2", 00:20:45.497 "uuid": "ee7cd947-b2e5-5e7e-a679-e59601dcb255", 00:20:45.497 "is_configured": true, 00:20:45.497 "data_offset": 0, 00:20:45.497 "data_size": 65536 00:20:45.497 } 00:20:45.497 ] 00:20:45.497 }' 00:20:45.497 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.497 05:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:45.755 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:45.755 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:45.755 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:45.755 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:45.755 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:45.755 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.755 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.755 05:31:17 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.755 05:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:45.755 05:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.755 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:45.755 "name": "raid_bdev1", 00:20:45.755 "uuid": "d0cec37b-1d41-4aed-af37-ff8aeece41bf", 00:20:45.755 "strip_size_kb": 0, 00:20:45.755 "state": "online", 00:20:45.755 "raid_level": "raid1", 00:20:45.755 "superblock": false, 00:20:45.755 "num_base_bdevs": 2, 00:20:45.755 "num_base_bdevs_discovered": 1, 00:20:45.755 "num_base_bdevs_operational": 1, 00:20:45.755 "base_bdevs_list": [ 00:20:45.755 { 00:20:45.755 "name": null, 00:20:45.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.755 "is_configured": false, 00:20:45.755 "data_offset": 0, 00:20:45.756 "data_size": 65536 00:20:45.756 }, 00:20:45.756 { 00:20:45.756 "name": "BaseBdev2", 00:20:45.756 "uuid": "ee7cd947-b2e5-5e7e-a679-e59601dcb255", 00:20:45.756 "is_configured": true, 00:20:45.756 "data_offset": 0, 00:20:45.756 "data_size": 65536 00:20:45.756 } 00:20:45.756 ] 00:20:45.756 }' 00:20:45.756 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:46.014 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:46.014 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:46.014 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:46.015 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:46.015 05:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.015 05:31:17 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.015 [2024-11-20 05:31:17.655167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:46.015 05:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.015 05:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:46.015 [2024-11-20 05:31:17.686115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:46.015 [2024-11-20 05:31:17.687926] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:46.015 200.00 IOPS, 600.00 MiB/s [2024-11-20T05:31:17.850Z] [2024-11-20 05:31:17.804496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:46.015 [2024-11-20 05:31:17.804993] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:46.272 [2024-11-20 05:31:17.924209] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:46.272 [2024-11-20 05:31:17.924518] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:46.531 [2024-11-20 05:31:18.253566] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:46.789 [2024-11-20 05:31:18.384226] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:46.790 [2024-11-20 05:31:18.621069] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:47.048 "name": "raid_bdev1", 00:20:47.048 "uuid": "d0cec37b-1d41-4aed-af37-ff8aeece41bf", 00:20:47.048 "strip_size_kb": 0, 00:20:47.048 "state": "online", 00:20:47.048 "raid_level": "raid1", 00:20:47.048 "superblock": false, 00:20:47.048 "num_base_bdevs": 2, 00:20:47.048 "num_base_bdevs_discovered": 2, 00:20:47.048 "num_base_bdevs_operational": 2, 00:20:47.048 "process": { 00:20:47.048 "type": "rebuild", 00:20:47.048 "target": "spare", 00:20:47.048 "progress": { 00:20:47.048 "blocks": 14336, 00:20:47.048 "percent": 21 00:20:47.048 } 00:20:47.048 }, 00:20:47.048 "base_bdevs_list": [ 00:20:47.048 { 00:20:47.048 "name": "spare", 00:20:47.048 "uuid": "40eddcf9-19c7-52d2-a29c-a66915a3d49b", 00:20:47.048 "is_configured": true, 00:20:47.048 "data_offset": 0, 00:20:47.048 "data_size": 65536 00:20:47.048 }, 00:20:47.048 { 00:20:47.048 "name": "BaseBdev2", 00:20:47.048 "uuid": "ee7cd947-b2e5-5e7e-a679-e59601dcb255", 00:20:47.048 "is_configured": true, 00:20:47.048 
"data_offset": 0, 00:20:47.048 "data_size": 65536 00:20:47.048 } 00:20:47.048 ] 00:20:47.048 }' 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:47.048 [2024-11-20 05:31:18.735799] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=313 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:47.048 162.67 IOPS, 488.00 MiB/s [2024-11-20T05:31:18.883Z] 05:31:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.048 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:47.048 "name": "raid_bdev1", 00:20:47.048 "uuid": "d0cec37b-1d41-4aed-af37-ff8aeece41bf", 00:20:47.048 "strip_size_kb": 0, 00:20:47.048 "state": "online", 00:20:47.048 "raid_level": "raid1", 00:20:47.048 "superblock": false, 00:20:47.048 "num_base_bdevs": 2, 00:20:47.048 "num_base_bdevs_discovered": 2, 00:20:47.048 "num_base_bdevs_operational": 2, 00:20:47.048 "process": { 00:20:47.048 "type": "rebuild", 00:20:47.048 "target": "spare", 00:20:47.048 "progress": { 00:20:47.048 "blocks": 16384, 00:20:47.048 "percent": 25 00:20:47.048 } 00:20:47.048 }, 00:20:47.048 "base_bdevs_list": [ 00:20:47.048 { 00:20:47.048 "name": "spare", 00:20:47.048 "uuid": "40eddcf9-19c7-52d2-a29c-a66915a3d49b", 00:20:47.048 "is_configured": true, 00:20:47.048 "data_offset": 0, 00:20:47.048 "data_size": 65536 00:20:47.048 }, 00:20:47.048 { 00:20:47.049 "name": "BaseBdev2", 00:20:47.049 "uuid": "ee7cd947-b2e5-5e7e-a679-e59601dcb255", 00:20:47.049 "is_configured": true, 00:20:47.049 "data_offset": 0, 00:20:47.049 "data_size": 65536 00:20:47.049 } 00:20:47.049 ] 00:20:47.049 }' 00:20:47.049 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:47.049 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:47.049 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:20:47.330 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:47.330 05:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:47.603 [2024-11-20 05:31:19.174937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:47.860 [2024-11-20 05:31:19.543626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:48.118 [2024-11-20 05:31:19.746692] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:48.118 138.00 IOPS, 414.00 MiB/s [2024-11-20T05:31:19.953Z] 05:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:48.119 05:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:48.119 05:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:48.119 05:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:48.119 05:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:48.119 05:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:48.119 05:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.119 05:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.119 05:31:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.119 05:31:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:48.119 05:31:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:20:48.119 05:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:48.119 "name": "raid_bdev1", 00:20:48.119 "uuid": "d0cec37b-1d41-4aed-af37-ff8aeece41bf", 00:20:48.119 "strip_size_kb": 0, 00:20:48.119 "state": "online", 00:20:48.119 "raid_level": "raid1", 00:20:48.119 "superblock": false, 00:20:48.119 "num_base_bdevs": 2, 00:20:48.119 "num_base_bdevs_discovered": 2, 00:20:48.119 "num_base_bdevs_operational": 2, 00:20:48.119 "process": { 00:20:48.119 "type": "rebuild", 00:20:48.119 "target": "spare", 00:20:48.119 "progress": { 00:20:48.119 "blocks": 30720, 00:20:48.119 "percent": 46 00:20:48.119 } 00:20:48.119 }, 00:20:48.119 "base_bdevs_list": [ 00:20:48.119 { 00:20:48.119 "name": "spare", 00:20:48.119 "uuid": "40eddcf9-19c7-52d2-a29c-a66915a3d49b", 00:20:48.119 "is_configured": true, 00:20:48.119 "data_offset": 0, 00:20:48.119 "data_size": 65536 00:20:48.119 }, 00:20:48.119 { 00:20:48.119 "name": "BaseBdev2", 00:20:48.119 "uuid": "ee7cd947-b2e5-5e7e-a679-e59601dcb255", 00:20:48.119 "is_configured": true, 00:20:48.119 "data_offset": 0, 00:20:48.119 "data_size": 65536 00:20:48.119 } 00:20:48.119 ] 00:20:48.119 }' 00:20:48.119 05:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:48.376 05:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:48.376 05:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:48.376 [2024-11-20 05:31:19.969771] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:48.376 05:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:48.376 05:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:48.634 [2024-11-20 05:31:20.419326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:49.201 123.40 IOPS, 370.20 MiB/s [2024-11-20T05:31:21.036Z] 05:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:49.201 05:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:49.201 05:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:49.201 05:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:49.201 05:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:49.201 05:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.201 05:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.201 05:31:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.201 05:31:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:49.201 05:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.201 05:31:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.201 05:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:49.201 "name": "raid_bdev1", 00:20:49.201 "uuid": "d0cec37b-1d41-4aed-af37-ff8aeece41bf", 00:20:49.201 "strip_size_kb": 0, 00:20:49.201 "state": "online", 00:20:49.201 "raid_level": "raid1", 00:20:49.201 "superblock": false, 00:20:49.201 "num_base_bdevs": 2, 00:20:49.201 "num_base_bdevs_discovered": 2, 00:20:49.201 "num_base_bdevs_operational": 2, 00:20:49.201 "process": { 00:20:49.201 "type": "rebuild", 00:20:49.201 "target": "spare", 00:20:49.201 "progress": { 00:20:49.201 "blocks": 49152, 00:20:49.201 "percent": 75 00:20:49.201 } 00:20:49.201 }, 
00:20:49.201 "base_bdevs_list": [ 00:20:49.201 { 00:20:49.201 "name": "spare", 00:20:49.201 "uuid": "40eddcf9-19c7-52d2-a29c-a66915a3d49b", 00:20:49.201 "is_configured": true, 00:20:49.201 "data_offset": 0, 00:20:49.201 "data_size": 65536 00:20:49.201 }, 00:20:49.201 { 00:20:49.201 "name": "BaseBdev2", 00:20:49.201 "uuid": "ee7cd947-b2e5-5e7e-a679-e59601dcb255", 00:20:49.201 "is_configured": true, 00:20:49.201 "data_offset": 0, 00:20:49.201 "data_size": 65536 00:20:49.201 } 00:20:49.201 ] 00:20:49.201 }' 00:20:49.201 05:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:49.459 05:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:49.459 05:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:49.459 05:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:49.459 05:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:49.459 [2024-11-20 05:31:21.092130] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:49.717 [2024-11-20 05:31:21.303178] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:20:49.717 [2024-11-20 05:31:21.303488] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:20:49.717 [2024-11-20 05:31:21.530439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:20:49.974 [2024-11-20 05:31:21.732112] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:20:49.974 [2024-11-20 05:31:21.732478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
59392 offset_begin: 55296 offset_end: 61440 00:20:50.539 107.67 IOPS, 323.00 MiB/s [2024-11-20T05:31:22.374Z] [2024-11-20 05:31:22.082189] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:50.539 05:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:50.539 05:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.539 05:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:50.539 05:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:50.539 05:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:50.539 05:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:50.539 05:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.539 05:31:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.539 05:31:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:50.539 05:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.539 05:31:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.539 05:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:50.539 "name": "raid_bdev1", 00:20:50.539 "uuid": "d0cec37b-1d41-4aed-af37-ff8aeece41bf", 00:20:50.539 "strip_size_kb": 0, 00:20:50.539 "state": "online", 00:20:50.539 "raid_level": "raid1", 00:20:50.539 "superblock": false, 00:20:50.539 "num_base_bdevs": 2, 00:20:50.539 "num_base_bdevs_discovered": 2, 00:20:50.539 "num_base_bdevs_operational": 2, 00:20:50.539 "process": { 00:20:50.539 "type": "rebuild", 00:20:50.539 "target": "spare", 
00:20:50.539 "progress": { 00:20:50.539 "blocks": 65536, 00:20:50.539 "percent": 100 00:20:50.539 } 00:20:50.539 }, 00:20:50.539 "base_bdevs_list": [ 00:20:50.539 { 00:20:50.539 "name": "spare", 00:20:50.539 "uuid": "40eddcf9-19c7-52d2-a29c-a66915a3d49b", 00:20:50.539 "is_configured": true, 00:20:50.539 "data_offset": 0, 00:20:50.539 "data_size": 65536 00:20:50.539 }, 00:20:50.539 { 00:20:50.539 "name": "BaseBdev2", 00:20:50.539 "uuid": "ee7cd947-b2e5-5e7e-a679-e59601dcb255", 00:20:50.539 "is_configured": true, 00:20:50.539 "data_offset": 0, 00:20:50.540 "data_size": 65536 00:20:50.540 } 00:20:50.540 ] 00:20:50.540 }' 00:20:50.540 05:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.540 05:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.540 05:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:50.540 [2024-11-20 05:31:22.182179] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:50.540 [2024-11-20 05:31:22.184361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.540 05:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.540 05:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:51.412 96.57 IOPS, 289.71 MiB/s [2024-11-20T05:31:23.247Z] 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:51.412 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.412 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.412 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:51.412 05:31:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:20:51.412 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.412 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.412 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.412 05:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.412 05:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:51.412 05:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.412 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.412 "name": "raid_bdev1", 00:20:51.412 "uuid": "d0cec37b-1d41-4aed-af37-ff8aeece41bf", 00:20:51.412 "strip_size_kb": 0, 00:20:51.412 "state": "online", 00:20:51.412 "raid_level": "raid1", 00:20:51.412 "superblock": false, 00:20:51.412 "num_base_bdevs": 2, 00:20:51.412 "num_base_bdevs_discovered": 2, 00:20:51.412 "num_base_bdevs_operational": 2, 00:20:51.412 "base_bdevs_list": [ 00:20:51.412 { 00:20:51.412 "name": "spare", 00:20:51.412 "uuid": "40eddcf9-19c7-52d2-a29c-a66915a3d49b", 00:20:51.412 "is_configured": true, 00:20:51.412 "data_offset": 0, 00:20:51.412 "data_size": 65536 00:20:51.412 }, 00:20:51.412 { 00:20:51.412 "name": "BaseBdev2", 00:20:51.412 "uuid": "ee7cd947-b2e5-5e7e-a679-e59601dcb255", 00:20:51.412 "is_configured": true, 00:20:51.412 "data_offset": 0, 00:20:51.412 "data_size": 65536 00:20:51.412 } 00:20:51.412 ] 00:20:51.412 }' 00:20:51.412 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.670 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:51.670 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:20:51.670 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:51.670 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:20:51.670 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.670 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.670 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:51.670 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:51.670 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.670 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.670 05:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.670 05:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:51.670 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.670 05:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.670 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.670 "name": "raid_bdev1", 00:20:51.670 "uuid": "d0cec37b-1d41-4aed-af37-ff8aeece41bf", 00:20:51.670 "strip_size_kb": 0, 00:20:51.670 "state": "online", 00:20:51.670 "raid_level": "raid1", 00:20:51.670 "superblock": false, 00:20:51.670 "num_base_bdevs": 2, 00:20:51.670 "num_base_bdevs_discovered": 2, 00:20:51.670 "num_base_bdevs_operational": 2, 00:20:51.670 "base_bdevs_list": [ 00:20:51.670 { 00:20:51.670 "name": "spare", 00:20:51.670 "uuid": "40eddcf9-19c7-52d2-a29c-a66915a3d49b", 00:20:51.670 "is_configured": true, 00:20:51.670 "data_offset": 0, 00:20:51.670 
"data_size": 65536 00:20:51.670 }, 00:20:51.670 { 00:20:51.670 "name": "BaseBdev2", 00:20:51.671 "uuid": "ee7cd947-b2e5-5e7e-a679-e59601dcb255", 00:20:51.671 "is_configured": true, 00:20:51.671 "data_offset": 0, 00:20:51.671 "data_size": 65536 00:20:51.671 } 00:20:51.671 ] 00:20:51.671 }' 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.671 05:31:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.671 "name": "raid_bdev1", 00:20:51.671 "uuid": "d0cec37b-1d41-4aed-af37-ff8aeece41bf", 00:20:51.671 "strip_size_kb": 0, 00:20:51.671 "state": "online", 00:20:51.671 "raid_level": "raid1", 00:20:51.671 "superblock": false, 00:20:51.671 "num_base_bdevs": 2, 00:20:51.671 "num_base_bdevs_discovered": 2, 00:20:51.671 "num_base_bdevs_operational": 2, 00:20:51.671 "base_bdevs_list": [ 00:20:51.671 { 00:20:51.671 "name": "spare", 00:20:51.671 "uuid": "40eddcf9-19c7-52d2-a29c-a66915a3d49b", 00:20:51.671 "is_configured": true, 00:20:51.671 "data_offset": 0, 00:20:51.671 "data_size": 65536 00:20:51.671 }, 00:20:51.671 { 00:20:51.671 "name": "BaseBdev2", 00:20:51.671 "uuid": "ee7cd947-b2e5-5e7e-a679-e59601dcb255", 00:20:51.671 "is_configured": true, 00:20:51.671 "data_offset": 0, 00:20:51.671 "data_size": 65536 00:20:51.671 } 00:20:51.671 ] 00:20:51.671 }' 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.671 05:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:51.929 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:51.929 05:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.929 05:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:51.929 [2024-11-20 05:31:23.716007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:20:51.929 [2024-11-20 05:31:23.716044] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:52.188 00:20:52.188 Latency(us) 00:20:52.188 [2024-11-20T05:31:24.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.188 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:52.188 raid_bdev1 : 8.01 89.69 269.06 0.00 0.00 15558.26 252.06 112923.57 00:20:52.188 [2024-11-20T05:31:24.023Z] =================================================================================================================== 00:20:52.188 [2024-11-20T05:31:24.023Z] Total : 89.69 269.06 0.00 0.00 15558.26 252.06 112923.57 00:20:52.188 [2024-11-20 05:31:23.808905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.188 [2024-11-20 05:31:23.808964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:52.188 [2024-11-20 05:31:23.809049] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:52.188 [2024-11-20 05:31:23.809059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:52.188 { 00:20:52.188 "results": [ 00:20:52.188 { 00:20:52.188 "job": "raid_bdev1", 00:20:52.188 "core_mask": "0x1", 00:20:52.188 "workload": "randrw", 00:20:52.188 "percentage": 50, 00:20:52.188 "status": "finished", 00:20:52.188 "queue_depth": 2, 00:20:52.188 "io_size": 3145728, 00:20:52.188 "runtime": 8.005712, 00:20:52.188 "iops": 89.68596422154582, 00:20:52.188 "mibps": 269.05789266463745, 00:20:52.188 "io_failed": 0, 00:20:52.188 "io_timeout": 0, 00:20:52.188 "avg_latency_us": 15558.260398542963, 00:20:52.188 "min_latency_us": 252.06153846153848, 00:20:52.188 "max_latency_us": 112923.56923076924 00:20:52.188 } 00:20:52.188 ], 00:20:52.188 "core_count": 1 00:20:52.188 } 00:20:52.188 05:31:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:52.188 05:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk spare /dev/nbd0 00:20:52.446 /dev/nbd0 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:52.446 1+0 records in 00:20:52.446 1+0 records out 00:20:52.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301871 s, 13.6 MB/s 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@891 -- # return 0 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:52.446 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:52.447 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:52.447 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:52.447 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:52.447 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:52.447 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:52.704 /dev/nbd1 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@871 -- # local i 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:52.704 1+0 records in 00:20:52.704 1+0 records out 00:20:52.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292218 s, 14.0 MB/s 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:52.704 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:53.023 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:53.023 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:53.023 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:53.023 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:53.023 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:53.023 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:53.023 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:53.023 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:53.023 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:53.023 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:53.023 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:53.023 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:20:53.023 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:53.023 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:53.023 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 74439 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 74439 ']' 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 74439 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74439 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:53.281 killing process with pid 74439 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74439' 00:20:53.281 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 74439 00:20:53.281 Received shutdown signal, test time was about 9.110081 seconds 00:20:53.281 00:20:53.281 Latency(us) 00:20:53.282 [2024-11-20T05:31:25.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.282 [2024-11-20T05:31:25.117Z] =================================================================================================================== 00:20:53.282 [2024-11-20T05:31:25.117Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:53.282 [2024-11-20 05:31:24.900373] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:53.282 05:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 74439 00:20:53.282 [2024-11-20 05:31:25.023074] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:53.845 05:31:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:20:53.845 00:20:53.845 real 0m11.360s 00:20:53.845 user 0m13.848s 00:20:53.845 sys 0m1.127s 00:20:53.845 05:31:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:53.845 05:31:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:53.845 ************************************ 00:20:53.845 END TEST raid_rebuild_test_io 00:20:53.845 ************************************ 00:20:54.102 05:31:25 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:20:54.102 05:31:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:54.102 
05:31:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:54.102 05:31:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:54.102 ************************************ 00:20:54.102 START TEST raid_rebuild_test_sb_io 00:20:54.102 ************************************ 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=74817 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 74817 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 74817 ']' 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:54.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:54.102 05:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:54.102 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:54.102 Zero copy mechanism will not be used. 00:20:54.102 [2024-11-20 05:31:25.772711] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:20:54.102 [2024-11-20 05:31:25.772835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74817 ] 00:20:54.102 [2024-11-20 05:31:25.923456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.358 [2024-11-20 05:31:26.024938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.358 [2024-11-20 05:31:26.145168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:54.358 [2024-11-20 05:31:26.145205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:54.976 BaseBdev1_malloc 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:54.976 [2024-11-20 05:31:26.658872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:54.976 [2024-11-20 05:31:26.658936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.976 [2024-11-20 05:31:26.658955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:54.976 [2024-11-20 05:31:26.658965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.976 [2024-11-20 05:31:26.660870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.976 [2024-11-20 05:31:26.660905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:54.976 BaseBdev1 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:54.976 BaseBdev2_malloc 00:20:54.976 05:31:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:54.976 [2024-11-20 05:31:26.692317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:54.976 [2024-11-20 05:31:26.692387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.976 [2024-11-20 05:31:26.692404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:54.976 [2024-11-20 05:31:26.692412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.976 [2024-11-20 05:31:26.694196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.976 [2024-11-20 05:31:26.694228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:54.976 BaseBdev2 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:54.976 spare_malloc 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:54.976 05:31:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.976 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:54.977 spare_delay 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:54.977 [2024-11-20 05:31:26.751065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:54.977 [2024-11-20 05:31:26.751120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.977 [2024-11-20 05:31:26.751136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:54.977 [2024-11-20 05:31:26.751145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.977 [2024-11-20 05:31:26.752994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.977 [2024-11-20 05:31:26.753156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:54.977 spare 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:54.977 [2024-11-20 05:31:26.759117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:20:54.977 [2024-11-20 05:31:26.760790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:54.977 [2024-11-20 05:31:26.760988] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:54.977 [2024-11-20 05:31:26.761079] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:54.977 [2024-11-20 05:31:26.761323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:54.977 [2024-11-20 05:31:26.761543] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:54.977 [2024-11-20 05:31:26.761599] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:54.977 [2024-11-20 05:31:26.761812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.977 05:31:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.977 "name": "raid_bdev1", 00:20:54.977 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:20:54.977 "strip_size_kb": 0, 00:20:54.977 "state": "online", 00:20:54.977 "raid_level": "raid1", 00:20:54.977 "superblock": true, 00:20:54.977 "num_base_bdevs": 2, 00:20:54.977 "num_base_bdevs_discovered": 2, 00:20:54.977 "num_base_bdevs_operational": 2, 00:20:54.977 "base_bdevs_list": [ 00:20:54.977 { 00:20:54.977 "name": "BaseBdev1", 00:20:54.977 "uuid": "cbe90bd3-0b8a-5f5a-824b-38edc3e6c5be", 00:20:54.977 "is_configured": true, 00:20:54.977 "data_offset": 2048, 00:20:54.977 "data_size": 63488 00:20:54.977 }, 00:20:54.977 { 00:20:54.977 "name": "BaseBdev2", 00:20:54.977 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:20:54.977 "is_configured": true, 00:20:54.977 "data_offset": 2048, 00:20:54.977 "data_size": 63488 00:20:54.977 } 00:20:54.977 ] 00:20:54.977 }' 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.977 05:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:55.234 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:55.234 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:55.234 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.234 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:55.234 [2024-11-20 05:31:27.055489] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 
00:20:55.526 [2024-11-20 05:31:27.111180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.526 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:55.527 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.527 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.527 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.527 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.527 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.527 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.527 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.527 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:55.527 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.527 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:20:55.527 "name": "raid_bdev1", 00:20:55.527 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:20:55.527 "strip_size_kb": 0, 00:20:55.527 "state": "online", 00:20:55.527 "raid_level": "raid1", 00:20:55.527 "superblock": true, 00:20:55.527 "num_base_bdevs": 2, 00:20:55.527 "num_base_bdevs_discovered": 1, 00:20:55.527 "num_base_bdevs_operational": 1, 00:20:55.527 "base_bdevs_list": [ 00:20:55.527 { 00:20:55.527 "name": null, 00:20:55.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.527 "is_configured": false, 00:20:55.527 "data_offset": 0, 00:20:55.527 "data_size": 63488 00:20:55.527 }, 00:20:55.527 { 00:20:55.527 "name": "BaseBdev2", 00:20:55.527 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:20:55.527 "is_configured": true, 00:20:55.527 "data_offset": 2048, 00:20:55.527 "data_size": 63488 00:20:55.527 } 00:20:55.527 ] 00:20:55.527 }' 00:20:55.527 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.527 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:55.527 [2024-11-20 05:31:27.196677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:55.527 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:55.527 Zero copy mechanism will not be used. 00:20:55.527 Running I/O for 60 seconds... 
00:20:55.793 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:55.793 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.793 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:55.793 [2024-11-20 05:31:27.428645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:55.793 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.793 05:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:55.793 [2024-11-20 05:31:27.462974] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:55.793 [2024-11-20 05:31:27.464826] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:55.793 [2024-11-20 05:31:27.583894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:55.793 [2024-11-20 05:31:27.584434] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:56.050 [2024-11-20 05:31:27.799335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:56.050 [2024-11-20 05:31:27.799653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:56.307 [2024-11-20 05:31:28.119726] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:56.307 [2024-11-20 05:31:28.120245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:56.565 150.00 IOPS, 450.00 MiB/s [2024-11-20T05:31:28.400Z] [2024-11-20 05:31:28.329173] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:56.823 "name": "raid_bdev1", 00:20:56.823 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:20:56.823 "strip_size_kb": 0, 00:20:56.823 "state": "online", 00:20:56.823 "raid_level": "raid1", 00:20:56.823 "superblock": true, 00:20:56.823 "num_base_bdevs": 2, 00:20:56.823 "num_base_bdevs_discovered": 2, 00:20:56.823 "num_base_bdevs_operational": 2, 00:20:56.823 "process": { 00:20:56.823 "type": "rebuild", 00:20:56.823 "target": "spare", 00:20:56.823 "progress": { 00:20:56.823 "blocks": 12288, 00:20:56.823 "percent": 19 00:20:56.823 } 00:20:56.823 }, 00:20:56.823 "base_bdevs_list": [ 00:20:56.823 { 00:20:56.823 "name": "spare", 
00:20:56.823 "uuid": "97ad7d6a-9daf-5743-87cf-9a6d04bcca91", 00:20:56.823 "is_configured": true, 00:20:56.823 "data_offset": 2048, 00:20:56.823 "data_size": 63488 00:20:56.823 }, 00:20:56.823 { 00:20:56.823 "name": "BaseBdev2", 00:20:56.823 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:20:56.823 "is_configured": true, 00:20:56.823 "data_offset": 2048, 00:20:56.823 "data_size": 63488 00:20:56.823 } 00:20:56.823 ] 00:20:56.823 }' 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.823 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:56.823 [2024-11-20 05:31:28.558129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:56.823 [2024-11-20 05:31:28.558654] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:56.823 [2024-11-20 05:31:28.560048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:57.081 [2024-11-20 05:31:28.672393] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:57.081 [2024-11-20 05:31:28.684752] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:20:57.081 [2024-11-20 05:31:28.698133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:57.081 [2024-11-20 05:31:28.698294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:57.081 [2024-11-20 05:31:28.698339] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:57.081 [2024-11-20 05:31:28.728328] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:57.082 "name": "raid_bdev1", 00:20:57.082 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:20:57.082 "strip_size_kb": 0, 00:20:57.082 "state": "online", 00:20:57.082 "raid_level": "raid1", 00:20:57.082 "superblock": true, 00:20:57.082 "num_base_bdevs": 2, 00:20:57.082 "num_base_bdevs_discovered": 1, 00:20:57.082 "num_base_bdevs_operational": 1, 00:20:57.082 "base_bdevs_list": [ 00:20:57.082 { 00:20:57.082 "name": null, 00:20:57.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.082 "is_configured": false, 00:20:57.082 "data_offset": 0, 00:20:57.082 "data_size": 63488 00:20:57.082 }, 00:20:57.082 { 00:20:57.082 "name": "BaseBdev2", 00:20:57.082 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:20:57.082 "is_configured": true, 00:20:57.082 "data_offset": 2048, 00:20:57.082 "data_size": 63488 00:20:57.082 } 00:20:57.082 ] 00:20:57.082 }' 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:57.082 05:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:57.340 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:57.340 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.340 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:57.340 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:57.340 05:31:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.340 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.340 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.340 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.340 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:57.340 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.340 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.340 "name": "raid_bdev1", 00:20:57.340 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:20:57.340 "strip_size_kb": 0, 00:20:57.340 "state": "online", 00:20:57.340 "raid_level": "raid1", 00:20:57.340 "superblock": true, 00:20:57.340 "num_base_bdevs": 2, 00:20:57.340 "num_base_bdevs_discovered": 1, 00:20:57.340 "num_base_bdevs_operational": 1, 00:20:57.340 "base_bdevs_list": [ 00:20:57.340 { 00:20:57.340 "name": null, 00:20:57.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.340 "is_configured": false, 00:20:57.340 "data_offset": 0, 00:20:57.340 "data_size": 63488 00:20:57.340 }, 00:20:57.340 { 00:20:57.340 "name": "BaseBdev2", 00:20:57.340 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:20:57.340 "is_configured": true, 00:20:57.340 "data_offset": 2048, 00:20:57.340 "data_size": 63488 00:20:57.340 } 00:20:57.340 ] 00:20:57.340 }' 00:20:57.340 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.340 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:57.340 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.340 05:31:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:57.340 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:57.340 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.340 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:57.340 [2024-11-20 05:31:29.167316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:57.597 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.597 05:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:57.597 168.50 IOPS, 505.50 MiB/s [2024-11-20T05:31:29.432Z] [2024-11-20 05:31:29.209249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:57.597 [2024-11-20 05:31:29.210947] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:57.597 [2024-11-20 05:31:29.313346] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:57.597 [2024-11-20 05:31:29.314031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:57.855 [2024-11-20 05:31:29.530021] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:57.855 [2024-11-20 05:31:29.530518] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:58.113 [2024-11-20 05:31:29.859857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:58.113 [2024-11-20 05:31:29.860492] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 
offset_begin: 6144 offset_end: 12288 00:20:58.377 [2024-11-20 05:31:30.069044] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:58.377 [2024-11-20 05:31:30.069526] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:58.377 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.377 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.377 141.67 IOPS, 425.00 MiB/s [2024-11-20T05:31:30.212Z] 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:58.377 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:58.377 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.377 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.377 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.377 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.377 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.637 "name": "raid_bdev1", 00:20:58.637 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:20:58.637 "strip_size_kb": 0, 00:20:58.637 "state": "online", 00:20:58.637 "raid_level": "raid1", 00:20:58.637 "superblock": true, 00:20:58.637 "num_base_bdevs": 2, 00:20:58.637 "num_base_bdevs_discovered": 2, 00:20:58.637 
"num_base_bdevs_operational": 2, 00:20:58.637 "process": { 00:20:58.637 "type": "rebuild", 00:20:58.637 "target": "spare", 00:20:58.637 "progress": { 00:20:58.637 "blocks": 10240, 00:20:58.637 "percent": 16 00:20:58.637 } 00:20:58.637 }, 00:20:58.637 "base_bdevs_list": [ 00:20:58.637 { 00:20:58.637 "name": "spare", 00:20:58.637 "uuid": "97ad7d6a-9daf-5743-87cf-9a6d04bcca91", 00:20:58.637 "is_configured": true, 00:20:58.637 "data_offset": 2048, 00:20:58.637 "data_size": 63488 00:20:58.637 }, 00:20:58.637 { 00:20:58.637 "name": "BaseBdev2", 00:20:58.637 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:20:58.637 "is_configured": true, 00:20:58.637 "data_offset": 2048, 00:20:58.637 "data_size": 63488 00:20:58.637 } 00:20:58.637 ] 00:20:58.637 }' 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:58.637 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=325 00:20:58.637 05:31:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.637 "name": "raid_bdev1", 00:20:58.637 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:20:58.637 "strip_size_kb": 0, 00:20:58.637 "state": "online", 00:20:58.637 "raid_level": "raid1", 00:20:58.637 "superblock": true, 00:20:58.637 "num_base_bdevs": 2, 00:20:58.637 "num_base_bdevs_discovered": 2, 00:20:58.637 "num_base_bdevs_operational": 2, 00:20:58.637 "process": { 00:20:58.637 "type": "rebuild", 00:20:58.637 "target": "spare", 00:20:58.637 "progress": { 00:20:58.637 "blocks": 12288, 00:20:58.637 "percent": 19 00:20:58.637 } 00:20:58.637 }, 00:20:58.637 "base_bdevs_list": [ 00:20:58.637 { 00:20:58.637 "name": "spare", 00:20:58.637 "uuid": 
"97ad7d6a-9daf-5743-87cf-9a6d04bcca91", 00:20:58.637 "is_configured": true, 00:20:58.637 "data_offset": 2048, 00:20:58.637 "data_size": 63488 00:20:58.637 }, 00:20:58.637 { 00:20:58.637 "name": "BaseBdev2", 00:20:58.637 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:20:58.637 "is_configured": true, 00:20:58.637 "data_offset": 2048, 00:20:58.637 "data_size": 63488 00:20:58.637 } 00:20:58.637 ] 00:20:58.637 }' 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.637 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.638 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.638 05:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:59.204 [2024-11-20 05:31:30.731184] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:59.204 [2024-11-20 05:31:30.844633] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:59.463 [2024-11-20 05:31:31.163977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:59.463 [2024-11-20 05:31:31.164543] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:59.721 120.50 IOPS, 361.50 MiB/s [2024-11-20T05:31:31.556Z] [2024-11-20 05:31:31.379437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:59.721 [2024-11-20 05:31:31.379765] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 
offset_end: 30720 00:20:59.721 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:59.721 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.721 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.721 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:59.721 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.721 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.721 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.721 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.721 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:59.721 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.721 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.721 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.721 "name": "raid_bdev1", 00:20:59.721 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:20:59.721 "strip_size_kb": 0, 00:20:59.721 "state": "online", 00:20:59.721 "raid_level": "raid1", 00:20:59.721 "superblock": true, 00:20:59.721 "num_base_bdevs": 2, 00:20:59.721 "num_base_bdevs_discovered": 2, 00:20:59.721 "num_base_bdevs_operational": 2, 00:20:59.721 "process": { 00:20:59.721 "type": "rebuild", 00:20:59.721 "target": "spare", 00:20:59.721 "progress": { 00:20:59.722 "blocks": 28672, 00:20:59.722 "percent": 45 00:20:59.722 } 00:20:59.722 }, 00:20:59.722 "base_bdevs_list": [ 00:20:59.722 { 00:20:59.722 
"name": "spare", 00:20:59.722 "uuid": "97ad7d6a-9daf-5743-87cf-9a6d04bcca91", 00:20:59.722 "is_configured": true, 00:20:59.722 "data_offset": 2048, 00:20:59.722 "data_size": 63488 00:20:59.722 }, 00:20:59.722 { 00:20:59.722 "name": "BaseBdev2", 00:20:59.722 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:20:59.722 "is_configured": true, 00:20:59.722 "data_offset": 2048, 00:20:59.722 "data_size": 63488 00:20:59.722 } 00:20:59.722 ] 00:20:59.722 }' 00:20:59.722 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.722 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.722 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.722 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.722 05:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:00.287 [2024-11-20 05:31:32.038691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:00.806 104.60 IOPS, 313.80 MiB/s [2024-11-20T05:31:32.641Z] 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:00.806 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.806 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.806 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:00.806 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:00.806 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.806 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.806 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.806 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:00.806 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.806 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.806 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.806 "name": "raid_bdev1", 00:21:00.806 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:21:00.806 "strip_size_kb": 0, 00:21:00.806 "state": "online", 00:21:00.806 "raid_level": "raid1", 00:21:00.806 "superblock": true, 00:21:00.806 "num_base_bdevs": 2, 00:21:00.806 "num_base_bdevs_discovered": 2, 00:21:00.806 "num_base_bdevs_operational": 2, 00:21:00.806 "process": { 00:21:00.806 "type": "rebuild", 00:21:00.806 "target": "spare", 00:21:00.806 "progress": { 00:21:00.806 "blocks": 47104, 00:21:00.806 "percent": 74 00:21:00.806 } 00:21:00.806 }, 00:21:00.806 "base_bdevs_list": [ 00:21:00.806 { 00:21:00.806 "name": "spare", 00:21:00.806 "uuid": "97ad7d6a-9daf-5743-87cf-9a6d04bcca91", 00:21:00.806 "is_configured": true, 00:21:00.806 "data_offset": 2048, 00:21:00.806 "data_size": 63488 00:21:00.806 }, 00:21:00.806 { 00:21:00.806 "name": "BaseBdev2", 00:21:00.806 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:21:00.806 "is_configured": true, 00:21:00.806 "data_offset": 2048, 00:21:00.806 "data_size": 63488 00:21:00.806 } 00:21:00.806 ] 00:21:00.806 }' 00:21:00.806 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.806 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.806 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.806 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.806 05:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:01.067 [2024-11-20 05:31:32.673750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:21:01.326 [2024-11-20 05:31:33.112448] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:01.584 93.00 IOPS, 279.00 MiB/s [2024-11-20T05:31:33.419Z] [2024-11-20 05:31:33.342221] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:01.843 [2024-11-20 05:31:33.442271] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:01.843 [2024-11-20 05:31:33.450065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:01.843 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:01.843 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:01.843 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.843 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:01.843 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:01.843 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.843 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.843 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.843 05:31:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.843 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:01.843 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.843 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.843 "name": "raid_bdev1", 00:21:01.843 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:21:01.843 "strip_size_kb": 0, 00:21:01.843 "state": "online", 00:21:01.843 "raid_level": "raid1", 00:21:01.843 "superblock": true, 00:21:01.843 "num_base_bdevs": 2, 00:21:01.843 "num_base_bdevs_discovered": 2, 00:21:01.843 "num_base_bdevs_operational": 2, 00:21:01.843 "base_bdevs_list": [ 00:21:01.843 { 00:21:01.843 "name": "spare", 00:21:01.843 "uuid": "97ad7d6a-9daf-5743-87cf-9a6d04bcca91", 00:21:01.843 "is_configured": true, 00:21:01.843 "data_offset": 2048, 00:21:01.843 "data_size": 63488 00:21:01.843 }, 00:21:01.843 { 00:21:01.843 "name": "BaseBdev2", 00:21:01.843 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:21:01.843 "is_configured": true, 00:21:01.843 "data_offset": 2048, 00:21:01.843 "data_size": 63488 00:21:01.843 } 00:21:01.843 ] 00:21:01.843 }' 00:21:01.843 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:02.101 05:31:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:02.101 "name": "raid_bdev1", 00:21:02.101 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:21:02.101 "strip_size_kb": 0, 00:21:02.101 "state": "online", 00:21:02.101 "raid_level": "raid1", 00:21:02.101 "superblock": true, 00:21:02.101 "num_base_bdevs": 2, 00:21:02.101 "num_base_bdevs_discovered": 2, 00:21:02.101 "num_base_bdevs_operational": 2, 00:21:02.101 "base_bdevs_list": [ 00:21:02.101 { 00:21:02.101 "name": "spare", 00:21:02.101 "uuid": "97ad7d6a-9daf-5743-87cf-9a6d04bcca91", 00:21:02.101 "is_configured": true, 00:21:02.101 "data_offset": 2048, 00:21:02.101 "data_size": 63488 00:21:02.101 }, 00:21:02.101 { 00:21:02.101 "name": "BaseBdev2", 00:21:02.101 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:21:02.101 "is_configured": true, 00:21:02.101 "data_offset": 2048, 00:21:02.101 "data_size": 63488 00:21:02.101 } 00:21:02.101 ] 00:21:02.101 }' 00:21:02.101 05:31:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.101 "name": "raid_bdev1", 00:21:02.101 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:21:02.101 "strip_size_kb": 0, 00:21:02.101 "state": "online", 00:21:02.101 "raid_level": "raid1", 00:21:02.101 "superblock": true, 00:21:02.101 "num_base_bdevs": 2, 00:21:02.101 "num_base_bdevs_discovered": 2, 00:21:02.101 "num_base_bdevs_operational": 2, 00:21:02.101 "base_bdevs_list": [ 00:21:02.101 { 00:21:02.101 "name": "spare", 00:21:02.101 "uuid": "97ad7d6a-9daf-5743-87cf-9a6d04bcca91", 00:21:02.101 "is_configured": true, 00:21:02.101 "data_offset": 2048, 00:21:02.101 "data_size": 63488 00:21:02.101 }, 00:21:02.101 { 00:21:02.101 "name": "BaseBdev2", 00:21:02.101 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:21:02.101 "is_configured": true, 00:21:02.101 "data_offset": 2048, 00:21:02.101 "data_size": 63488 00:21:02.101 } 00:21:02.101 ] 00:21:02.101 }' 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.101 05:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:02.359 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:02.359 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.359 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:02.359 [2024-11-20 05:31:34.177947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:02.359 [2024-11-20 05:31:34.177979] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:02.617 84.57 IOPS, 253.71 MiB/s 00:21:02.617 Latency(us) 00:21:02.617 
[2024-11-20T05:31:34.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.617 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:02.617 raid_bdev1 : 7.01 84.63 253.89 0.00 0.00 15984.12 242.61 108890.58 00:21:02.617 [2024-11-20T05:31:34.452Z] =================================================================================================================== 00:21:02.617 [2024-11-20T05:31:34.452Z] Total : 84.63 253.89 0.00 0.00 15984.12 242.61 108890.58 00:21:02.617 [2024-11-20 05:31:34.218850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.617 [2024-11-20 05:31:34.219017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:02.617 [2024-11-20 05:31:34.219119] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:02.617 [2024-11-20 05:31:34.219183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:02.617 { 00:21:02.617 "results": [ 00:21:02.617 { 00:21:02.617 "job": "raid_bdev1", 00:21:02.617 "core_mask": "0x1", 00:21:02.617 "workload": "randrw", 00:21:02.617 "percentage": 50, 00:21:02.617 "status": "finished", 00:21:02.617 "queue_depth": 2, 00:21:02.617 "io_size": 3145728, 00:21:02.617 "runtime": 7.007078, 00:21:02.617 "iops": 84.62871399462087, 00:21:02.617 "mibps": 253.88614198386261, 00:21:02.617 "io_failed": 0, 00:21:02.617 "io_timeout": 0, 00:21:02.617 "avg_latency_us": 15984.119299520042, 00:21:02.617 "min_latency_us": 242.60923076923078, 00:21:02.617 "max_latency_us": 108890.58461538462 00:21:02.617 } 00:21:02.617 ], 00:21:02.617 "core_count": 1 00:21:02.617 } 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:21:02.618 05:31:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:02.618 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:21:02.876 /dev/nbd0 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:02.876 1+0 records in 00:21:02.876 1+0 records out 00:21:02.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512932 s, 8.0 MB/s 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:02.876 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:02.877 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:03.135 /dev/nbd1 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:03.135 1+0 records in 00:21:03.135 1+0 records out 00:21:03.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030524 s, 13.4 MB/s 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:03.135 
05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:03.135 05:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:03.392 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:03.392 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:03.393 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:03.393 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:03.393 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:03.393 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:03.393 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:03.393 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:03.393 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:03.393 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:03.393 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:03.393 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:03.393 
05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:03.393 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:03.393 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:03.650 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:03.650 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:03.650 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:03.650 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:03.650 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:03.650 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:03.650 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:03.650 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:03.650 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:03.650 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:03.650 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.650 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:03.650 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.651 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:03.651 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.651 
05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:03.651 [2024-11-20 05:31:35.395375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:03.651 [2024-11-20 05:31:35.395432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.651 [2024-11-20 05:31:35.395450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:03.651 [2024-11-20 05:31:35.395460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.651 [2024-11-20 05:31:35.397450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.651 [2024-11-20 05:31:35.397482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:03.651 [2024-11-20 05:31:35.397566] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:03.651 [2024-11-20 05:31:35.397609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:03.651 [2024-11-20 05:31:35.397729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:03.651 spare 00:21:03.651 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.651 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:03.651 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.651 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:03.909 [2024-11-20 05:31:35.497829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:03.909 [2024-11-20 05:31:35.497875] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:03.909 [2024-11-20 05:31:35.498198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d00002b0d0 00:21:03.909 [2024-11-20 05:31:35.498397] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:03.909 [2024-11-20 05:31:35.498410] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:03.909 [2024-11-20 05:31:35.498586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.909 "name": "raid_bdev1", 00:21:03.909 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:21:03.909 "strip_size_kb": 0, 00:21:03.909 "state": "online", 00:21:03.909 "raid_level": "raid1", 00:21:03.909 "superblock": true, 00:21:03.909 "num_base_bdevs": 2, 00:21:03.909 "num_base_bdevs_discovered": 2, 00:21:03.909 "num_base_bdevs_operational": 2, 00:21:03.909 "base_bdevs_list": [ 00:21:03.909 { 00:21:03.909 "name": "spare", 00:21:03.909 "uuid": "97ad7d6a-9daf-5743-87cf-9a6d04bcca91", 00:21:03.909 "is_configured": true, 00:21:03.909 "data_offset": 2048, 00:21:03.909 "data_size": 63488 00:21:03.909 }, 00:21:03.909 { 00:21:03.909 "name": "BaseBdev2", 00:21:03.909 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:21:03.909 "is_configured": true, 00:21:03.909 "data_offset": 2048, 00:21:03.909 "data_size": 63488 00:21:03.909 } 00:21:03.909 ] 00:21:03.909 }' 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.909 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.168 "name": "raid_bdev1", 00:21:04.168 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:21:04.168 "strip_size_kb": 0, 00:21:04.168 "state": "online", 00:21:04.168 "raid_level": "raid1", 00:21:04.168 "superblock": true, 00:21:04.168 "num_base_bdevs": 2, 00:21:04.168 "num_base_bdevs_discovered": 2, 00:21:04.168 "num_base_bdevs_operational": 2, 00:21:04.168 "base_bdevs_list": [ 00:21:04.168 { 00:21:04.168 "name": "spare", 00:21:04.168 "uuid": "97ad7d6a-9daf-5743-87cf-9a6d04bcca91", 00:21:04.168 "is_configured": true, 00:21:04.168 "data_offset": 2048, 00:21:04.168 "data_size": 63488 00:21:04.168 }, 00:21:04.168 { 00:21:04.168 "name": "BaseBdev2", 00:21:04.168 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:21:04.168 "is_configured": true, 00:21:04.168 "data_offset": 2048, 00:21:04.168 "data_size": 63488 00:21:04.168 } 00:21:04.168 ] 00:21:04.168 }' 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.168 [2024-11-20 05:31:35.943608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.168 "name": "raid_bdev1", 00:21:04.168 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:21:04.168 "strip_size_kb": 0, 00:21:04.168 "state": "online", 00:21:04.168 "raid_level": "raid1", 00:21:04.168 "superblock": true, 00:21:04.168 "num_base_bdevs": 2, 00:21:04.168 "num_base_bdevs_discovered": 1, 00:21:04.168 "num_base_bdevs_operational": 1, 00:21:04.168 "base_bdevs_list": [ 00:21:04.168 { 00:21:04.168 "name": null, 00:21:04.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.168 "is_configured": false, 00:21:04.168 "data_offset": 0, 00:21:04.168 "data_size": 63488 00:21:04.168 }, 00:21:04.168 { 00:21:04.168 "name": "BaseBdev2", 00:21:04.168 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:21:04.168 "is_configured": true, 00:21:04.168 "data_offset": 2048, 00:21:04.168 "data_size": 63488 00:21:04.168 } 00:21:04.168 ] 00:21:04.168 }' 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:21:04.168 05:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.733 05:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:04.733 05:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.734 05:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.734 [2024-11-20 05:31:36.295723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:04.734 [2024-11-20 05:31:36.295925] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:04.734 [2024-11-20 05:31:36.295941] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:04.734 [2024-11-20 05:31:36.295978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:04.734 [2024-11-20 05:31:36.305814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:21:04.734 05:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.734 05:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:04.734 [2024-11-20 05:31:36.307632] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:05.739 05:31:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:05.739 "name": "raid_bdev1", 00:21:05.739 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:21:05.739 "strip_size_kb": 0, 00:21:05.739 "state": "online", 00:21:05.739 "raid_level": "raid1", 00:21:05.739 "superblock": true, 00:21:05.739 "num_base_bdevs": 2, 00:21:05.739 "num_base_bdevs_discovered": 2, 00:21:05.739 "num_base_bdevs_operational": 2, 00:21:05.739 "process": { 00:21:05.739 "type": "rebuild", 00:21:05.739 "target": "spare", 00:21:05.739 "progress": { 00:21:05.739 "blocks": 20480, 00:21:05.739 "percent": 32 00:21:05.739 } 00:21:05.739 }, 00:21:05.739 "base_bdevs_list": [ 00:21:05.739 { 00:21:05.739 "name": "spare", 00:21:05.739 "uuid": "97ad7d6a-9daf-5743-87cf-9a6d04bcca91", 00:21:05.739 "is_configured": true, 00:21:05.739 "data_offset": 2048, 00:21:05.739 "data_size": 63488 00:21:05.739 }, 00:21:05.739 { 00:21:05.739 "name": "BaseBdev2", 00:21:05.739 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:21:05.739 "is_configured": true, 00:21:05.739 "data_offset": 2048, 00:21:05.739 "data_size": 63488 00:21:05.739 } 00:21:05.739 ] 00:21:05.739 }' 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:05.739 05:31:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:05.739 [2024-11-20 05:31:37.413897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:05.739 [2024-11-20 05:31:37.414301] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:05.739 [2024-11-20 05:31:37.414344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.739 [2024-11-20 05:31:37.414360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:05.739 [2024-11-20 05:31:37.414383] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:05.739 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:05.740 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:05.740 05:31:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:05.740 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.740 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.740 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.740 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.740 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.740 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.740 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.740 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:05.740 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.740 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.740 "name": "raid_bdev1", 00:21:05.740 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:21:05.740 "strip_size_kb": 0, 00:21:05.740 "state": "online", 00:21:05.740 "raid_level": "raid1", 00:21:05.740 "superblock": true, 00:21:05.740 "num_base_bdevs": 2, 00:21:05.740 "num_base_bdevs_discovered": 1, 00:21:05.740 "num_base_bdevs_operational": 1, 00:21:05.740 "base_bdevs_list": [ 00:21:05.740 { 00:21:05.740 "name": null, 00:21:05.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.740 "is_configured": false, 00:21:05.740 "data_offset": 0, 00:21:05.740 "data_size": 63488 00:21:05.740 }, 00:21:05.740 { 00:21:05.740 "name": "BaseBdev2", 00:21:05.740 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:21:05.740 "is_configured": true, 00:21:05.740 "data_offset": 2048, 00:21:05.740 
"data_size": 63488 00:21:05.740 } 00:21:05.740 ] 00:21:05.740 }' 00:21:05.740 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.740 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:05.999 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:05.999 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.999 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:05.999 [2024-11-20 05:31:37.751753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:05.999 [2024-11-20 05:31:37.751828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.999 [2024-11-20 05:31:37.751849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:05.999 [2024-11-20 05:31:37.751858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.999 [2024-11-20 05:31:37.752314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.999 [2024-11-20 05:31:37.752328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:05.999 [2024-11-20 05:31:37.752433] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:05.999 [2024-11-20 05:31:37.752445] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:05.999 [2024-11-20 05:31:37.752455] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:05.999 [2024-11-20 05:31:37.752477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.999 [2024-11-20 05:31:37.762469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:21:05.999 spare 00:21:05.999 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.999 05:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:05.999 [2024-11-20 05:31:37.764157] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:07.371 "name": "raid_bdev1", 00:21:07.371 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:21:07.371 "strip_size_kb": 0, 00:21:07.371 
"state": "online", 00:21:07.371 "raid_level": "raid1", 00:21:07.371 "superblock": true, 00:21:07.371 "num_base_bdevs": 2, 00:21:07.371 "num_base_bdevs_discovered": 2, 00:21:07.371 "num_base_bdevs_operational": 2, 00:21:07.371 "process": { 00:21:07.371 "type": "rebuild", 00:21:07.371 "target": "spare", 00:21:07.371 "progress": { 00:21:07.371 "blocks": 20480, 00:21:07.371 "percent": 32 00:21:07.371 } 00:21:07.371 }, 00:21:07.371 "base_bdevs_list": [ 00:21:07.371 { 00:21:07.371 "name": "spare", 00:21:07.371 "uuid": "97ad7d6a-9daf-5743-87cf-9a6d04bcca91", 00:21:07.371 "is_configured": true, 00:21:07.371 "data_offset": 2048, 00:21:07.371 "data_size": 63488 00:21:07.371 }, 00:21:07.371 { 00:21:07.371 "name": "BaseBdev2", 00:21:07.371 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:21:07.371 "is_configured": true, 00:21:07.371 "data_offset": 2048, 00:21:07.371 "data_size": 63488 00:21:07.371 } 00:21:07.371 ] 00:21:07.371 }' 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:07.371 [2024-11-20 05:31:38.862497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:07.371 [2024-11-20 05:31:38.870916] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:21:07.371 [2024-11-20 05:31:38.870984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.371 [2024-11-20 05:31:38.870997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:07.371 [2024-11-20 05:31:38.871005] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:07.371 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.372 "name": "raid_bdev1", 00:21:07.372 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:21:07.372 "strip_size_kb": 0, 00:21:07.372 "state": "online", 00:21:07.372 "raid_level": "raid1", 00:21:07.372 "superblock": true, 00:21:07.372 "num_base_bdevs": 2, 00:21:07.372 "num_base_bdevs_discovered": 1, 00:21:07.372 "num_base_bdevs_operational": 1, 00:21:07.372 "base_bdevs_list": [ 00:21:07.372 { 00:21:07.372 "name": null, 00:21:07.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.372 "is_configured": false, 00:21:07.372 "data_offset": 0, 00:21:07.372 "data_size": 63488 00:21:07.372 }, 00:21:07.372 { 00:21:07.372 "name": "BaseBdev2", 00:21:07.372 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:21:07.372 "is_configured": true, 00:21:07.372 "data_offset": 2048, 00:21:07.372 "data_size": 63488 00:21:07.372 } 00:21:07.372 ] 00:21:07.372 }' 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.372 05:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:07.629 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:07.629 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:07.629 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:07.629 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:07.629 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:07.629 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.629 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.629 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.629 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:07.629 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.629 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:07.629 "name": "raid_bdev1", 00:21:07.629 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:21:07.629 "strip_size_kb": 0, 00:21:07.629 "state": "online", 00:21:07.629 "raid_level": "raid1", 00:21:07.629 "superblock": true, 00:21:07.629 "num_base_bdevs": 2, 00:21:07.629 "num_base_bdevs_discovered": 1, 00:21:07.629 "num_base_bdevs_operational": 1, 00:21:07.629 "base_bdevs_list": [ 00:21:07.629 { 00:21:07.629 "name": null, 00:21:07.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.630 "is_configured": false, 00:21:07.630 "data_offset": 0, 00:21:07.630 "data_size": 63488 00:21:07.630 }, 00:21:07.630 { 00:21:07.630 "name": "BaseBdev2", 00:21:07.630 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:21:07.630 "is_configured": true, 00:21:07.630 "data_offset": 2048, 00:21:07.630 "data_size": 63488 00:21:07.630 } 00:21:07.630 ] 00:21:07.630 }' 00:21:07.630 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:07.630 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:07.630 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:07.630 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:07.630 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:07.630 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.630 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:07.630 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.630 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:07.630 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.630 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:07.630 [2024-11-20 05:31:39.316237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:07.630 [2024-11-20 05:31:39.316299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.630 [2024-11-20 05:31:39.316317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:07.630 [2024-11-20 05:31:39.316328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.630 [2024-11-20 05:31:39.316743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.630 [2024-11-20 05:31:39.316759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:07.630 [2024-11-20 05:31:39.316826] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:07.630 [2024-11-20 05:31:39.316841] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:07.630 [2024-11-20 05:31:39.316848] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:07.630 [2024-11-20 05:31:39.316859] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:07.630 BaseBdev1 00:21:07.630 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.630 05:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:08.567 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:08.567 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.567 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.567 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.567 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.567 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:08.567 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.567 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.567 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.567 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.567 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.567 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.567 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.567 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:08.567 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.567 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.568 "name": "raid_bdev1", 00:21:08.568 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:21:08.568 "strip_size_kb": 0, 00:21:08.568 "state": "online", 00:21:08.568 "raid_level": "raid1", 00:21:08.568 "superblock": true, 00:21:08.568 "num_base_bdevs": 2, 00:21:08.568 "num_base_bdevs_discovered": 1, 00:21:08.568 "num_base_bdevs_operational": 1, 00:21:08.568 "base_bdevs_list": [ 00:21:08.568 { 00:21:08.568 "name": null, 00:21:08.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.568 "is_configured": false, 00:21:08.568 "data_offset": 0, 00:21:08.568 "data_size": 63488 00:21:08.568 }, 00:21:08.568 { 00:21:08.568 "name": "BaseBdev2", 00:21:08.568 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:21:08.568 "is_configured": true, 00:21:08.568 "data_offset": 2048, 00:21:08.568 "data_size": 63488 00:21:08.568 } 00:21:08.568 ] 00:21:08.568 }' 00:21:08.568 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.568 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:09.134 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:09.134 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:09.134 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:09.134 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:09.134 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:09.134 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.134 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:21:09.134 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.134 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:09.134 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.134 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:09.134 "name": "raid_bdev1", 00:21:09.134 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:21:09.134 "strip_size_kb": 0, 00:21:09.134 "state": "online", 00:21:09.134 "raid_level": "raid1", 00:21:09.134 "superblock": true, 00:21:09.134 "num_base_bdevs": 2, 00:21:09.134 "num_base_bdevs_discovered": 1, 00:21:09.134 "num_base_bdevs_operational": 1, 00:21:09.134 "base_bdevs_list": [ 00:21:09.134 { 00:21:09.134 "name": null, 00:21:09.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.134 "is_configured": false, 00:21:09.134 "data_offset": 0, 00:21:09.134 "data_size": 63488 00:21:09.134 }, 00:21:09.134 { 00:21:09.134 "name": "BaseBdev2", 00:21:09.134 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:21:09.135 "is_configured": true, 00:21:09.135 "data_offset": 2048, 00:21:09.135 "data_size": 63488 00:21:09.135 } 00:21:09.135 ] 00:21:09.135 }' 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@650 -- # local es=0 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:09.135 [2024-11-20 05:31:40.780724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:09.135 [2024-11-20 05:31:40.780888] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:09.135 [2024-11-20 05:31:40.780898] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:09.135 request: 00:21:09.135 { 00:21:09.135 "base_bdev": "BaseBdev1", 00:21:09.135 "raid_bdev": "raid_bdev1", 00:21:09.135 "method": "bdev_raid_add_base_bdev", 00:21:09.135 "req_id": 1 00:21:09.135 } 00:21:09.135 Got JSON-RPC error response 00:21:09.135 response: 00:21:09.135 { 00:21:09.135 "code": -22, 00:21:09.135 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:09.135 } 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:09.135 05:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.086 "name": "raid_bdev1", 00:21:10.086 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:21:10.086 "strip_size_kb": 0, 00:21:10.086 "state": "online", 00:21:10.086 "raid_level": "raid1", 00:21:10.086 "superblock": true, 00:21:10.086 "num_base_bdevs": 2, 00:21:10.086 "num_base_bdevs_discovered": 1, 00:21:10.086 "num_base_bdevs_operational": 1, 00:21:10.086 "base_bdevs_list": [ 00:21:10.086 { 00:21:10.086 "name": null, 00:21:10.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.086 "is_configured": false, 00:21:10.086 "data_offset": 0, 00:21:10.086 "data_size": 63488 00:21:10.086 }, 00:21:10.086 { 00:21:10.086 "name": "BaseBdev2", 00:21:10.086 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:21:10.086 "is_configured": true, 00:21:10.086 "data_offset": 2048, 00:21:10.086 "data_size": 63488 00:21:10.086 } 00:21:10.086 ] 00:21:10.086 }' 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.086 05:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:10.348 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.348 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:10.348 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:10.348 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:10.348 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.348 05:31:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.348 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.348 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.348 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:10.348 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.348 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.348 "name": "raid_bdev1", 00:21:10.348 "uuid": "387588e8-4faa-4474-b221-83187e87fdf7", 00:21:10.348 "strip_size_kb": 0, 00:21:10.348 "state": "online", 00:21:10.348 "raid_level": "raid1", 00:21:10.348 "superblock": true, 00:21:10.348 "num_base_bdevs": 2, 00:21:10.348 "num_base_bdevs_discovered": 1, 00:21:10.348 "num_base_bdevs_operational": 1, 00:21:10.348 "base_bdevs_list": [ 00:21:10.348 { 00:21:10.348 "name": null, 00:21:10.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.348 "is_configured": false, 00:21:10.348 "data_offset": 0, 00:21:10.348 "data_size": 63488 00:21:10.348 }, 00:21:10.348 { 00:21:10.348 "name": "BaseBdev2", 00:21:10.348 "uuid": "650599d0-f4e4-5598-9e29-ce32b1681f99", 00:21:10.348 "is_configured": true, 00:21:10.348 "data_offset": 2048, 00:21:10.348 "data_size": 63488 00:21:10.348 } 00:21:10.348 ] 00:21:10.348 }' 00:21:10.348 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.348 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:10.348 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:10.607 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:10.607 05:31:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 74817 00:21:10.607 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 74817 ']' 00:21:10.607 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 74817 00:21:10.607 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:21:10.607 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:10.607 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74817 00:21:10.607 killing process with pid 74817 00:21:10.607 Received shutdown signal, test time was about 15.005836 seconds 00:21:10.607 00:21:10.607 Latency(us) 00:21:10.607 [2024-11-20T05:31:42.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.607 [2024-11-20T05:31:42.442Z] =================================================================================================================== 00:21:10.607 [2024-11-20T05:31:42.442Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.607 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:10.607 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:10.607 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74817' 00:21:10.607 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 74817 00:21:10.607 [2024-11-20 05:31:42.204669] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:10.607 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 74817 00:21:10.607 [2024-11-20 05:31:42.204794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:10.607 [2024-11-20 05:31:42.204849] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:10.607 [2024-11-20 05:31:42.204857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:10.607 [2024-11-20 05:31:42.324531] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:11.191 ************************************ 00:21:11.191 END TEST raid_rebuild_test_sb_io 00:21:11.191 ************************************ 00:21:11.191 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:21:11.191 00:21:11.191 real 0m17.264s 00:21:11.191 user 0m22.021s 00:21:11.191 sys 0m1.526s 00:21:11.191 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:11.191 05:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:11.191 05:31:43 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:21:11.191 05:31:43 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:21:11.191 05:31:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:21:11.191 05:31:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:11.191 05:31:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:11.191 ************************************ 00:21:11.191 START TEST raid_rebuild_test 00:21:11.191 ************************************ 00:21:11.191 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:21:11.191 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:11.191 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:11.191 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:21:11.191 05:31:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:11.191 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:11.191 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:11.191 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:11.191 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:11.191 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:11.191 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:11.191 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:11.451 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:11.451 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:11.451 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:11.451 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:11.451 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:11.451 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:11.451 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:11.451 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:11.451 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:11.451 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:11.451 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:11.451 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:21:11.452 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:11.452 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:11.452 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:11.452 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:11.452 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:11.452 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:21:11.452 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75478 00:21:11.452 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75478 00:21:11.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.452 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75478 ']' 00:21:11.452 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.452 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:11.452 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.452 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:11.452 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.452 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:11.452 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:11.452 Zero copy mechanism will not be used. 
00:21:11.452 [2024-11-20 05:31:43.090716] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:21:11.452 [2024-11-20 05:31:43.090832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75478 ] 00:21:11.452 [2024-11-20 05:31:43.250842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.710 [2024-11-20 05:31:43.367316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.710 [2024-11-20 05:31:43.515381] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.710 [2024-11-20 05:31:43.515437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:12.275 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:12.275 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:21:12.275 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:12.275 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:12.275 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.275 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.275 BaseBdev1_malloc 00:21:12.275 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.275 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:12.275 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.275 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.275 
[2024-11-20 05:31:43.982148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:12.275 [2024-11-20 05:31:43.982543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.275 [2024-11-20 05:31:43.982591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:12.275 [2024-11-20 05:31:43.982611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.275 [2024-11-20 05:31:43.985902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.275 [2024-11-20 05:31:43.986083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:12.275 BaseBdev1 00:21:12.275 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.275 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:12.275 05:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:12.275 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.275 05:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.275 BaseBdev2_malloc 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.275 [2024-11-20 05:31:44.031696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:12.275 [2024-11-20 05:31:44.031778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:21:12.275 [2024-11-20 05:31:44.031799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:12.275 [2024-11-20 05:31:44.031811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.275 [2024-11-20 05:31:44.034114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.275 [2024-11-20 05:31:44.034155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:12.275 BaseBdev2 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.275 BaseBdev3_malloc 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.275 [2024-11-20 05:31:44.082896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:12.275 [2024-11-20 05:31:44.082966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.275 [2024-11-20 05:31:44.082989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:12.275 [2024-11-20 05:31:44.083002] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.275 [2024-11-20 05:31:44.085331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.275 [2024-11-20 05:31:44.085385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:12.275 BaseBdev3 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.275 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.534 BaseBdev4_malloc 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.534 [2024-11-20 05:31:44.126061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:12.534 [2024-11-20 05:31:44.126141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.534 [2024-11-20 05:31:44.126167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:12.534 [2024-11-20 05:31:44.126180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.534 [2024-11-20 05:31:44.128533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.534 [2024-11-20 05:31:44.128577] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:12.534 BaseBdev4 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.534 spare_malloc 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.534 spare_delay 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.534 [2024-11-20 05:31:44.176458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:12.534 [2024-11-20 05:31:44.176521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.534 [2024-11-20 05:31:44.176541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:12.534 [2024-11-20 05:31:44.176553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.534 [2024-11-20 
05:31:44.178817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.534 [2024-11-20 05:31:44.178852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:12.534 spare 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.534 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.534 [2024-11-20 05:31:44.184499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:12.535 [2024-11-20 05:31:44.186427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:12.535 [2024-11-20 05:31:44.186493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:12.535 [2024-11-20 05:31:44.186546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:12.535 [2024-11-20 05:31:44.186629] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:12.535 [2024-11-20 05:31:44.186641] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:12.535 [2024-11-20 05:31:44.186923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:12.535 [2024-11-20 05:31:44.187074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:12.535 [2024-11-20 05:31:44.187085] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:12.535 [2024-11-20 05:31:44.187235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.535 "name": "raid_bdev1", 00:21:12.535 "uuid": "63cb2da2-d2b2-4814-a240-df7f91618072", 00:21:12.535 "strip_size_kb": 0, 00:21:12.535 "state": "online", 00:21:12.535 "raid_level": 
"raid1", 00:21:12.535 "superblock": false, 00:21:12.535 "num_base_bdevs": 4, 00:21:12.535 "num_base_bdevs_discovered": 4, 00:21:12.535 "num_base_bdevs_operational": 4, 00:21:12.535 "base_bdevs_list": [ 00:21:12.535 { 00:21:12.535 "name": "BaseBdev1", 00:21:12.535 "uuid": "ec17c10e-d529-5a7a-85db-9a9c896326bd", 00:21:12.535 "is_configured": true, 00:21:12.535 "data_offset": 0, 00:21:12.535 "data_size": 65536 00:21:12.535 }, 00:21:12.535 { 00:21:12.535 "name": "BaseBdev2", 00:21:12.535 "uuid": "e3d1f6c0-f591-531b-bc2a-715ed3f4efe3", 00:21:12.535 "is_configured": true, 00:21:12.535 "data_offset": 0, 00:21:12.535 "data_size": 65536 00:21:12.535 }, 00:21:12.535 { 00:21:12.535 "name": "BaseBdev3", 00:21:12.535 "uuid": "216a6052-6ce1-5b33-a8b5-f222256eb366", 00:21:12.535 "is_configured": true, 00:21:12.535 "data_offset": 0, 00:21:12.535 "data_size": 65536 00:21:12.535 }, 00:21:12.535 { 00:21:12.535 "name": "BaseBdev4", 00:21:12.535 "uuid": "5508010f-b619-542a-8f4c-f768b38b59e2", 00:21:12.535 "is_configured": true, 00:21:12.535 "data_offset": 0, 00:21:12.535 "data_size": 65536 00:21:12.535 } 00:21:12.535 ] 00:21:12.535 }' 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.535 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.794 [2024-11-20 05:31:44.504963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.794 05:31:44 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:12.794 05:31:44 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:13.052 [2024-11-20 05:31:44.760694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:13.052 /dev/nbd0 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:13.052 1+0 records in 00:21:13.052 1+0 records out 00:21:13.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327776 s, 12.5 MB/s 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:13.052 05:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:19.610 65536+0 records in 00:21:19.610 65536+0 records out 00:21:19.610 33554432 bytes (34 MB, 32 MiB) copied, 6.23071 s, 5.4 MB/s 00:21:19.610 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:19.610 05:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:19.610 05:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:19.610 05:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:19.610 05:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:19.610 05:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:19.610 05:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:19.610 [2024-11-20 05:31:51.251954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.610 05:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:19.610 05:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:19.610 
05:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:19.610 05:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:19.610 05:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.611 [2024-11-20 05:31:51.280046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.611 05:31:51 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.611 "name": "raid_bdev1", 00:21:19.611 "uuid": "63cb2da2-d2b2-4814-a240-df7f91618072", 00:21:19.611 "strip_size_kb": 0, 00:21:19.611 "state": "online", 00:21:19.611 "raid_level": "raid1", 00:21:19.611 "superblock": false, 00:21:19.611 "num_base_bdevs": 4, 00:21:19.611 "num_base_bdevs_discovered": 3, 00:21:19.611 "num_base_bdevs_operational": 3, 00:21:19.611 "base_bdevs_list": [ 00:21:19.611 { 00:21:19.611 "name": null, 00:21:19.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.611 "is_configured": false, 00:21:19.611 "data_offset": 0, 00:21:19.611 "data_size": 65536 00:21:19.611 }, 00:21:19.611 { 00:21:19.611 "name": "BaseBdev2", 00:21:19.611 "uuid": "e3d1f6c0-f591-531b-bc2a-715ed3f4efe3", 00:21:19.611 "is_configured": true, 00:21:19.611 "data_offset": 0, 00:21:19.611 "data_size": 65536 00:21:19.611 }, 00:21:19.611 { 00:21:19.611 "name": "BaseBdev3", 00:21:19.611 "uuid": "216a6052-6ce1-5b33-a8b5-f222256eb366", 00:21:19.611 "is_configured": true, 00:21:19.611 "data_offset": 0, 00:21:19.611 "data_size": 65536 00:21:19.611 }, 00:21:19.611 { 00:21:19.611 "name": "BaseBdev4", 00:21:19.611 "uuid": "5508010f-b619-542a-8f4c-f768b38b59e2", 00:21:19.611 
"is_configured": true, 00:21:19.611 "data_offset": 0, 00:21:19.611 "data_size": 65536 00:21:19.611 } 00:21:19.611 ] 00:21:19.611 }' 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.611 05:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.869 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:19.869 05:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.869 05:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.869 [2024-11-20 05:31:51.660113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:19.869 [2024-11-20 05:31:51.668693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:21:19.869 05:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.869 05:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:19.869 [2024-11-20 05:31:51.670441] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:21.240 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:21.240 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:21.240 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:21.240 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:21.240 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:21.240 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.240 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:21.240 05:31:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.240 05:31:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.240 05:31:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.240 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:21.240 "name": "raid_bdev1", 00:21:21.240 "uuid": "63cb2da2-d2b2-4814-a240-df7f91618072", 00:21:21.240 "strip_size_kb": 0, 00:21:21.240 "state": "online", 00:21:21.240 "raid_level": "raid1", 00:21:21.240 "superblock": false, 00:21:21.240 "num_base_bdevs": 4, 00:21:21.240 "num_base_bdevs_discovered": 4, 00:21:21.240 "num_base_bdevs_operational": 4, 00:21:21.240 "process": { 00:21:21.240 "type": "rebuild", 00:21:21.240 "target": "spare", 00:21:21.240 "progress": { 00:21:21.240 "blocks": 20480, 00:21:21.240 "percent": 31 00:21:21.240 } 00:21:21.240 }, 00:21:21.240 "base_bdevs_list": [ 00:21:21.240 { 00:21:21.240 "name": "spare", 00:21:21.240 "uuid": "3c1866b5-53e9-5adf-961c-ab5701b9ee67", 00:21:21.240 "is_configured": true, 00:21:21.240 "data_offset": 0, 00:21:21.240 "data_size": 65536 00:21:21.240 }, 00:21:21.240 { 00:21:21.240 "name": "BaseBdev2", 00:21:21.240 "uuid": "e3d1f6c0-f591-531b-bc2a-715ed3f4efe3", 00:21:21.240 "is_configured": true, 00:21:21.240 "data_offset": 0, 00:21:21.240 "data_size": 65536 00:21:21.240 }, 00:21:21.240 { 00:21:21.240 "name": "BaseBdev3", 00:21:21.240 "uuid": "216a6052-6ce1-5b33-a8b5-f222256eb366", 00:21:21.240 "is_configured": true, 00:21:21.240 "data_offset": 0, 00:21:21.240 "data_size": 65536 00:21:21.240 }, 00:21:21.240 { 00:21:21.241 "name": "BaseBdev4", 00:21:21.241 "uuid": "5508010f-b619-542a-8f4c-f768b38b59e2", 00:21:21.241 "is_configured": true, 00:21:21.241 "data_offset": 0, 00:21:21.241 "data_size": 65536 00:21:21.241 } 00:21:21.241 ] 00:21:21.241 }' 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.241 [2024-11-20 05:31:52.784395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:21.241 [2024-11-20 05:31:52.877509] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:21.241 [2024-11-20 05:31:52.877599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.241 [2024-11-20 05:31:52.877615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:21.241 [2024-11-20 05:31:52.877626] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.241 "name": "raid_bdev1", 00:21:21.241 "uuid": "63cb2da2-d2b2-4814-a240-df7f91618072", 00:21:21.241 "strip_size_kb": 0, 00:21:21.241 "state": "online", 00:21:21.241 "raid_level": "raid1", 00:21:21.241 "superblock": false, 00:21:21.241 "num_base_bdevs": 4, 00:21:21.241 "num_base_bdevs_discovered": 3, 00:21:21.241 "num_base_bdevs_operational": 3, 00:21:21.241 "base_bdevs_list": [ 00:21:21.241 { 00:21:21.241 "name": null, 00:21:21.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.241 "is_configured": false, 00:21:21.241 "data_offset": 0, 00:21:21.241 "data_size": 65536 00:21:21.241 }, 00:21:21.241 { 00:21:21.241 "name": "BaseBdev2", 00:21:21.241 "uuid": "e3d1f6c0-f591-531b-bc2a-715ed3f4efe3", 00:21:21.241 "is_configured": true, 00:21:21.241 "data_offset": 0, 00:21:21.241 "data_size": 65536 00:21:21.241 }, 00:21:21.241 { 
00:21:21.241 "name": "BaseBdev3", 00:21:21.241 "uuid": "216a6052-6ce1-5b33-a8b5-f222256eb366", 00:21:21.241 "is_configured": true, 00:21:21.241 "data_offset": 0, 00:21:21.241 "data_size": 65536 00:21:21.241 }, 00:21:21.241 { 00:21:21.241 "name": "BaseBdev4", 00:21:21.241 "uuid": "5508010f-b619-542a-8f4c-f768b38b59e2", 00:21:21.241 "is_configured": true, 00:21:21.241 "data_offset": 0, 00:21:21.241 "data_size": 65536 00:21:21.241 } 00:21:21.241 ] 00:21:21.241 }' 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.241 05:31:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.500 05:31:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:21.500 05:31:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:21.500 05:31:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:21.500 05:31:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:21.500 05:31:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:21.500 05:31:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.500 05:31:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.500 05:31:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.500 05:31:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.500 05:31:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.500 05:31:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:21.500 "name": "raid_bdev1", 00:21:21.500 "uuid": "63cb2da2-d2b2-4814-a240-df7f91618072", 00:21:21.500 "strip_size_kb": 0, 00:21:21.500 "state": "online", 
00:21:21.500 "raid_level": "raid1", 00:21:21.500 "superblock": false, 00:21:21.500 "num_base_bdevs": 4, 00:21:21.500 "num_base_bdevs_discovered": 3, 00:21:21.500 "num_base_bdevs_operational": 3, 00:21:21.500 "base_bdevs_list": [ 00:21:21.500 { 00:21:21.500 "name": null, 00:21:21.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.500 "is_configured": false, 00:21:21.500 "data_offset": 0, 00:21:21.500 "data_size": 65536 00:21:21.500 }, 00:21:21.500 { 00:21:21.500 "name": "BaseBdev2", 00:21:21.500 "uuid": "e3d1f6c0-f591-531b-bc2a-715ed3f4efe3", 00:21:21.500 "is_configured": true, 00:21:21.500 "data_offset": 0, 00:21:21.500 "data_size": 65536 00:21:21.500 }, 00:21:21.500 { 00:21:21.500 "name": "BaseBdev3", 00:21:21.500 "uuid": "216a6052-6ce1-5b33-a8b5-f222256eb366", 00:21:21.500 "is_configured": true, 00:21:21.500 "data_offset": 0, 00:21:21.500 "data_size": 65536 00:21:21.500 }, 00:21:21.500 { 00:21:21.500 "name": "BaseBdev4", 00:21:21.500 "uuid": "5508010f-b619-542a-8f4c-f768b38b59e2", 00:21:21.500 "is_configured": true, 00:21:21.500 "data_offset": 0, 00:21:21.500 "data_size": 65536 00:21:21.500 } 00:21:21.500 ] 00:21:21.500 }' 00:21:21.500 05:31:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:21.500 05:31:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:21.500 05:31:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:21.759 05:31:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:21.759 05:31:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:21.759 05:31:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.759 05:31:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.759 [2024-11-20 05:31:53.362379] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:21.759 [2024-11-20 05:31:53.370386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:21:21.759 05:31:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.759 05:31:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:21.759 [2024-11-20 05:31:53.372162] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:22.693 "name": "raid_bdev1", 00:21:22.693 "uuid": "63cb2da2-d2b2-4814-a240-df7f91618072", 00:21:22.693 "strip_size_kb": 0, 00:21:22.693 "state": "online", 00:21:22.693 "raid_level": "raid1", 00:21:22.693 "superblock": false, 00:21:22.693 "num_base_bdevs": 4, 00:21:22.693 
"num_base_bdevs_discovered": 4, 00:21:22.693 "num_base_bdevs_operational": 4, 00:21:22.693 "process": { 00:21:22.693 "type": "rebuild", 00:21:22.693 "target": "spare", 00:21:22.693 "progress": { 00:21:22.693 "blocks": 20480, 00:21:22.693 "percent": 31 00:21:22.693 } 00:21:22.693 }, 00:21:22.693 "base_bdevs_list": [ 00:21:22.693 { 00:21:22.693 "name": "spare", 00:21:22.693 "uuid": "3c1866b5-53e9-5adf-961c-ab5701b9ee67", 00:21:22.693 "is_configured": true, 00:21:22.693 "data_offset": 0, 00:21:22.693 "data_size": 65536 00:21:22.693 }, 00:21:22.693 { 00:21:22.693 "name": "BaseBdev2", 00:21:22.693 "uuid": "e3d1f6c0-f591-531b-bc2a-715ed3f4efe3", 00:21:22.693 "is_configured": true, 00:21:22.693 "data_offset": 0, 00:21:22.693 "data_size": 65536 00:21:22.693 }, 00:21:22.693 { 00:21:22.693 "name": "BaseBdev3", 00:21:22.693 "uuid": "216a6052-6ce1-5b33-a8b5-f222256eb366", 00:21:22.693 "is_configured": true, 00:21:22.693 "data_offset": 0, 00:21:22.693 "data_size": 65536 00:21:22.693 }, 00:21:22.693 { 00:21:22.693 "name": "BaseBdev4", 00:21:22.693 "uuid": "5508010f-b619-542a-8f4c-f768b38b59e2", 00:21:22.693 "is_configured": true, 00:21:22.693 "data_offset": 0, 00:21:22.693 "data_size": 65536 00:21:22.693 } 00:21:22.693 ] 00:21:22.693 }' 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.693 05:31:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.693 [2024-11-20 05:31:54.498240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:22.952 [2024-11-20 05:31:54.579298] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.952 05:31:54 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:22.952 "name": "raid_bdev1", 00:21:22.952 "uuid": "63cb2da2-d2b2-4814-a240-df7f91618072", 00:21:22.952 "strip_size_kb": 0, 00:21:22.952 "state": "online", 00:21:22.952 "raid_level": "raid1", 00:21:22.952 "superblock": false, 00:21:22.952 "num_base_bdevs": 4, 00:21:22.952 "num_base_bdevs_discovered": 3, 00:21:22.952 "num_base_bdevs_operational": 3, 00:21:22.952 "process": { 00:21:22.952 "type": "rebuild", 00:21:22.952 "target": "spare", 00:21:22.952 "progress": { 00:21:22.952 "blocks": 24576, 00:21:22.952 "percent": 37 00:21:22.952 } 00:21:22.952 }, 00:21:22.952 "base_bdevs_list": [ 00:21:22.952 { 00:21:22.952 "name": "spare", 00:21:22.952 "uuid": "3c1866b5-53e9-5adf-961c-ab5701b9ee67", 00:21:22.952 "is_configured": true, 00:21:22.952 "data_offset": 0, 00:21:22.952 "data_size": 65536 00:21:22.952 }, 00:21:22.952 { 00:21:22.952 "name": null, 00:21:22.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.952 "is_configured": false, 00:21:22.952 "data_offset": 0, 00:21:22.952 "data_size": 65536 00:21:22.952 }, 00:21:22.952 { 00:21:22.952 "name": "BaseBdev3", 00:21:22.952 "uuid": "216a6052-6ce1-5b33-a8b5-f222256eb366", 00:21:22.952 "is_configured": true, 00:21:22.952 "data_offset": 0, 00:21:22.952 "data_size": 65536 00:21:22.952 }, 00:21:22.952 { 00:21:22.952 "name": "BaseBdev4", 00:21:22.952 "uuid": "5508010f-b619-542a-8f4c-f768b38b59e2", 00:21:22.952 "is_configured": true, 00:21:22.952 "data_offset": 0, 00:21:22.952 "data_size": 65536 00:21:22.952 } 00:21:22.952 ] 00:21:22.952 }' 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=349 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:22.952 "name": "raid_bdev1", 00:21:22.952 "uuid": "63cb2da2-d2b2-4814-a240-df7f91618072", 00:21:22.952 "strip_size_kb": 0, 00:21:22.952 "state": "online", 00:21:22.952 "raid_level": "raid1", 00:21:22.952 "superblock": false, 00:21:22.952 "num_base_bdevs": 4, 00:21:22.952 "num_base_bdevs_discovered": 3, 00:21:22.952 "num_base_bdevs_operational": 3, 00:21:22.952 "process": { 00:21:22.952 "type": "rebuild", 00:21:22.952 "target": "spare", 00:21:22.952 "progress": { 
00:21:22.952 "blocks": 26624, 00:21:22.952 "percent": 40 00:21:22.952 } 00:21:22.952 }, 00:21:22.952 "base_bdevs_list": [ 00:21:22.952 { 00:21:22.952 "name": "spare", 00:21:22.952 "uuid": "3c1866b5-53e9-5adf-961c-ab5701b9ee67", 00:21:22.952 "is_configured": true, 00:21:22.952 "data_offset": 0, 00:21:22.952 "data_size": 65536 00:21:22.952 }, 00:21:22.952 { 00:21:22.952 "name": null, 00:21:22.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.952 "is_configured": false, 00:21:22.952 "data_offset": 0, 00:21:22.952 "data_size": 65536 00:21:22.952 }, 00:21:22.952 { 00:21:22.952 "name": "BaseBdev3", 00:21:22.952 "uuid": "216a6052-6ce1-5b33-a8b5-f222256eb366", 00:21:22.952 "is_configured": true, 00:21:22.952 "data_offset": 0, 00:21:22.952 "data_size": 65536 00:21:22.952 }, 00:21:22.952 { 00:21:22.952 "name": "BaseBdev4", 00:21:22.952 "uuid": "5508010f-b619-542a-8f4c-f768b38b59e2", 00:21:22.952 "is_configured": true, 00:21:22.952 "data_offset": 0, 00:21:22.952 "data_size": 65536 00:21:22.952 } 00:21:22.952 ] 00:21:22.952 }' 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:22.952 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.953 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.210 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:23.210 05:31:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:24.144 "name": "raid_bdev1", 00:21:24.144 "uuid": "63cb2da2-d2b2-4814-a240-df7f91618072", 00:21:24.144 "strip_size_kb": 0, 00:21:24.144 "state": "online", 00:21:24.144 "raid_level": "raid1", 00:21:24.144 "superblock": false, 00:21:24.144 "num_base_bdevs": 4, 00:21:24.144 "num_base_bdevs_discovered": 3, 00:21:24.144 "num_base_bdevs_operational": 3, 00:21:24.144 "process": { 00:21:24.144 "type": "rebuild", 00:21:24.144 "target": "spare", 00:21:24.144 "progress": { 00:21:24.144 "blocks": 49152, 00:21:24.144 "percent": 75 00:21:24.144 } 00:21:24.144 }, 00:21:24.144 "base_bdevs_list": [ 00:21:24.144 { 00:21:24.144 "name": "spare", 00:21:24.144 "uuid": "3c1866b5-53e9-5adf-961c-ab5701b9ee67", 00:21:24.144 "is_configured": true, 00:21:24.144 "data_offset": 0, 00:21:24.144 "data_size": 65536 00:21:24.144 }, 00:21:24.144 { 00:21:24.144 "name": null, 00:21:24.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.144 "is_configured": false, 00:21:24.144 "data_offset": 0, 00:21:24.144 "data_size": 65536 00:21:24.144 }, 00:21:24.144 { 00:21:24.144 "name": "BaseBdev3", 00:21:24.144 "uuid": 
"216a6052-6ce1-5b33-a8b5-f222256eb366", 00:21:24.144 "is_configured": true, 00:21:24.144 "data_offset": 0, 00:21:24.144 "data_size": 65536 00:21:24.144 }, 00:21:24.144 { 00:21:24.144 "name": "BaseBdev4", 00:21:24.144 "uuid": "5508010f-b619-542a-8f4c-f768b38b59e2", 00:21:24.144 "is_configured": true, 00:21:24.144 "data_offset": 0, 00:21:24.144 "data_size": 65536 00:21:24.144 } 00:21:24.144 ] 00:21:24.144 }' 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:24.144 05:31:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:25.078 [2024-11-20 05:31:56.591162] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:25.078 [2024-11-20 05:31:56.591261] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:25.078 [2024-11-20 05:31:56.591308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.337 05:31:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:25.337 05:31:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:25.337 05:31:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:25.337 05:31:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:25.337 05:31:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:25.337 05:31:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:25.337 05:31:56 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.337 05:31:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.337 05:31:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.337 05:31:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.337 05:31:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.337 05:31:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:25.337 "name": "raid_bdev1", 00:21:25.337 "uuid": "63cb2da2-d2b2-4814-a240-df7f91618072", 00:21:25.337 "strip_size_kb": 0, 00:21:25.337 "state": "online", 00:21:25.337 "raid_level": "raid1", 00:21:25.337 "superblock": false, 00:21:25.337 "num_base_bdevs": 4, 00:21:25.337 "num_base_bdevs_discovered": 3, 00:21:25.337 "num_base_bdevs_operational": 3, 00:21:25.337 "base_bdevs_list": [ 00:21:25.337 { 00:21:25.337 "name": "spare", 00:21:25.337 "uuid": "3c1866b5-53e9-5adf-961c-ab5701b9ee67", 00:21:25.337 "is_configured": true, 00:21:25.337 "data_offset": 0, 00:21:25.337 "data_size": 65536 00:21:25.337 }, 00:21:25.337 { 00:21:25.337 "name": null, 00:21:25.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.337 "is_configured": false, 00:21:25.337 "data_offset": 0, 00:21:25.337 "data_size": 65536 00:21:25.337 }, 00:21:25.337 { 00:21:25.337 "name": "BaseBdev3", 00:21:25.337 "uuid": "216a6052-6ce1-5b33-a8b5-f222256eb366", 00:21:25.337 "is_configured": true, 00:21:25.337 "data_offset": 0, 00:21:25.337 "data_size": 65536 00:21:25.337 }, 00:21:25.337 { 00:21:25.337 "name": "BaseBdev4", 00:21:25.337 "uuid": "5508010f-b619-542a-8f4c-f768b38b59e2", 00:21:25.337 "is_configured": true, 00:21:25.337 "data_offset": 0, 00:21:25.337 "data_size": 65536 00:21:25.337 } 00:21:25.337 ] 00:21:25.337 }' 00:21:25.337 05:31:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:21:25.337 05:31:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:25.337 "name": "raid_bdev1", 00:21:25.337 "uuid": "63cb2da2-d2b2-4814-a240-df7f91618072", 00:21:25.337 "strip_size_kb": 0, 00:21:25.337 "state": "online", 00:21:25.337 "raid_level": "raid1", 00:21:25.337 "superblock": false, 00:21:25.337 "num_base_bdevs": 4, 00:21:25.337 "num_base_bdevs_discovered": 3, 00:21:25.337 "num_base_bdevs_operational": 3, 00:21:25.337 
"base_bdevs_list": [ 00:21:25.337 { 00:21:25.337 "name": "spare", 00:21:25.337 "uuid": "3c1866b5-53e9-5adf-961c-ab5701b9ee67", 00:21:25.337 "is_configured": true, 00:21:25.337 "data_offset": 0, 00:21:25.337 "data_size": 65536 00:21:25.337 }, 00:21:25.337 { 00:21:25.337 "name": null, 00:21:25.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.337 "is_configured": false, 00:21:25.337 "data_offset": 0, 00:21:25.337 "data_size": 65536 00:21:25.337 }, 00:21:25.337 { 00:21:25.337 "name": "BaseBdev3", 00:21:25.337 "uuid": "216a6052-6ce1-5b33-a8b5-f222256eb366", 00:21:25.337 "is_configured": true, 00:21:25.337 "data_offset": 0, 00:21:25.337 "data_size": 65536 00:21:25.337 }, 00:21:25.337 { 00:21:25.337 "name": "BaseBdev4", 00:21:25.337 "uuid": "5508010f-b619-542a-8f4c-f768b38b59e2", 00:21:25.337 "is_configured": true, 00:21:25.337 "data_offset": 0, 00:21:25.337 "data_size": 65536 00:21:25.337 } 00:21:25.337 ] 00:21:25.337 }' 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.337 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.338 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.338 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.338 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.338 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.596 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.596 "name": "raid_bdev1", 00:21:25.596 "uuid": "63cb2da2-d2b2-4814-a240-df7f91618072", 00:21:25.596 "strip_size_kb": 0, 00:21:25.596 "state": "online", 00:21:25.596 "raid_level": "raid1", 00:21:25.596 "superblock": false, 00:21:25.596 "num_base_bdevs": 4, 00:21:25.596 "num_base_bdevs_discovered": 3, 00:21:25.596 "num_base_bdevs_operational": 3, 00:21:25.596 "base_bdevs_list": [ 00:21:25.596 { 00:21:25.596 "name": "spare", 00:21:25.596 "uuid": "3c1866b5-53e9-5adf-961c-ab5701b9ee67", 00:21:25.596 "is_configured": true, 00:21:25.596 "data_offset": 0, 00:21:25.596 "data_size": 65536 00:21:25.596 }, 00:21:25.596 { 00:21:25.596 "name": null, 00:21:25.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.596 "is_configured": false, 00:21:25.596 "data_offset": 0, 00:21:25.596 "data_size": 65536 00:21:25.596 }, 00:21:25.596 { 00:21:25.596 "name": "BaseBdev3", 00:21:25.596 "uuid": 
"216a6052-6ce1-5b33-a8b5-f222256eb366", 00:21:25.596 "is_configured": true, 00:21:25.596 "data_offset": 0, 00:21:25.596 "data_size": 65536 00:21:25.596 }, 00:21:25.596 { 00:21:25.596 "name": "BaseBdev4", 00:21:25.596 "uuid": "5508010f-b619-542a-8f4c-f768b38b59e2", 00:21:25.596 "is_configured": true, 00:21:25.596 "data_offset": 0, 00:21:25.596 "data_size": 65536 00:21:25.596 } 00:21:25.596 ] 00:21:25.596 }' 00:21:25.596 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.596 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.853 [2024-11-20 05:31:57.460108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:25.853 [2024-11-20 05:31:57.460140] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:25.853 [2024-11-20 05:31:57.460221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:25.853 [2024-11-20 05:31:57.460300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:25.853 [2024-11-20 05:31:57.460309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:25.853 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:26.111 /dev/nbd0 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:26.111 05:31:57 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:26.111 1+0 records in 00:21:26.111 1+0 records out 00:21:26.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284286 s, 14.4 MB/s 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:26.111 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:26.111 /dev/nbd1 00:21:26.369 
05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:26.369 1+0 records in 00:21:26.369 1+0 records out 00:21:26.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315309 s, 13.0 MB/s 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:26.369 05:31:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:26.369 05:31:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:26.369 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:26.369 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:26.369 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:26.369 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:26.369 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:26.369 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:26.627 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:26.627 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:26.627 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:26.627 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:26.627 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:26.627 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:26.627 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:26.627 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:26.627 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:26.627 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75478 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 75478 ']' 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75478 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75478 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:26.886 killing process with pid 75478 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75478' 00:21:26.886 
05:31:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75478 00:21:26.886 Received shutdown signal, test time was about 60.000000 seconds 00:21:26.886 00:21:26.886 Latency(us) 00:21:26.886 [2024-11-20T05:31:58.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.886 [2024-11-20T05:31:58.721Z] =================================================================================================================== 00:21:26.886 [2024-11-20T05:31:58.721Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:26.886 [2024-11-20 05:31:58.529817] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:26.886 05:31:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75478 00:21:27.144 [2024-11-20 05:31:58.784190] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:21:27.709 00:21:27.709 real 0m16.375s 00:21:27.709 user 0m17.802s 00:21:27.709 sys 0m2.991s 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.709 ************************************ 00:21:27.709 END TEST raid_rebuild_test 00:21:27.709 ************************************ 00:21:27.709 05:31:59 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:21:27.709 05:31:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:21:27.709 05:31:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:27.709 05:31:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:27.709 ************************************ 00:21:27.709 START TEST raid_rebuild_test_sb 00:21:27.709 ************************************ 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75914 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75914 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 75914 ']' 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:27.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:27.709 05:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.710 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:27.710 Zero copy mechanism will not be used. 00:21:27.710 [2024-11-20 05:31:59.507794] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:21:27.710 [2024-11-20 05:31:59.507912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75914 ] 00:21:27.968 [2024-11-20 05:31:59.658479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.968 [2024-11-20 05:31:59.757888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.225 [2024-11-20 05:31:59.879586] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:28.225 [2024-11-20 05:31:59.879627] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:28.790 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:28.790 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:21:28.790 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:28.790 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:28.790 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.790 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.790 BaseBdev1_malloc 00:21:28.790 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.790 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:28.790 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.790 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.790 [2024-11-20 05:32:00.457641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:28.790 [2024-11-20 05:32:00.457703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.790 [2024-11-20 05:32:00.457724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:28.790 [2024-11-20 05:32:00.457736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.790 [2024-11-20 05:32:00.459605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.790 [2024-11-20 05:32:00.459637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:28.790 BaseBdev1 00:21:28.790 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.790 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:28.790 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:28.790 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.790 05:32:00 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.791 BaseBdev2_malloc 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.791 [2024-11-20 05:32:00.495354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:28.791 [2024-11-20 05:32:00.495418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.791 [2024-11-20 05:32:00.495435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:28.791 [2024-11-20 05:32:00.495445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.791 [2024-11-20 05:32:00.497301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.791 [2024-11-20 05:32:00.497333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:28.791 BaseBdev2 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.791 BaseBdev3_malloc 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.791 [2024-11-20 05:32:00.548483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:28.791 [2024-11-20 05:32:00.548535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.791 [2024-11-20 05:32:00.548556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:28.791 [2024-11-20 05:32:00.548566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.791 [2024-11-20 05:32:00.550439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.791 [2024-11-20 05:32:00.550471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:28.791 BaseBdev3 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.791 BaseBdev4_malloc 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:28.791 05:32:00 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.791 [2024-11-20 05:32:00.586449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:28.791 [2024-11-20 05:32:00.586495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.791 [2024-11-20 05:32:00.586510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:28.791 [2024-11-20 05:32:00.586521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.791 [2024-11-20 05:32:00.588317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.791 [2024-11-20 05:32:00.588349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:28.791 BaseBdev4 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.791 spare_malloc 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.791 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.049 spare_delay 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.049 [2024-11-20 05:32:00.631931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:29.049 [2024-11-20 05:32:00.631975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.049 [2024-11-20 05:32:00.631989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:29.049 [2024-11-20 05:32:00.631999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.049 [2024-11-20 05:32:00.633844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.049 [2024-11-20 05:32:00.633873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:29.049 spare 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.049 [2024-11-20 05:32:00.639977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:29.049 [2024-11-20 05:32:00.641583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:29.049 [2024-11-20 05:32:00.641639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:29.049 [2024-11-20 
05:32:00.641682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:29.049 [2024-11-20 05:32:00.641833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:29.049 [2024-11-20 05:32:00.641850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:29.049 [2024-11-20 05:32:00.642058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:29.049 [2024-11-20 05:32:00.642198] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:29.049 [2024-11-20 05:32:00.642210] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:29.049 [2024-11-20 05:32:00.642324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.049 "name": "raid_bdev1", 00:21:29.049 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:29.049 "strip_size_kb": 0, 00:21:29.049 "state": "online", 00:21:29.049 "raid_level": "raid1", 00:21:29.049 "superblock": true, 00:21:29.049 "num_base_bdevs": 4, 00:21:29.049 "num_base_bdevs_discovered": 4, 00:21:29.049 "num_base_bdevs_operational": 4, 00:21:29.049 "base_bdevs_list": [ 00:21:29.049 { 00:21:29.049 "name": "BaseBdev1", 00:21:29.049 "uuid": "b4a64dd5-4742-59e3-a99e-51058b8a743b", 00:21:29.049 "is_configured": true, 00:21:29.049 "data_offset": 2048, 00:21:29.049 "data_size": 63488 00:21:29.049 }, 00:21:29.049 { 00:21:29.049 "name": "BaseBdev2", 00:21:29.049 "uuid": "b86de24d-f6a3-57eb-ac28-b227aab50a80", 00:21:29.049 "is_configured": true, 00:21:29.049 "data_offset": 2048, 00:21:29.049 "data_size": 63488 00:21:29.049 }, 00:21:29.049 { 00:21:29.049 "name": "BaseBdev3", 00:21:29.049 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:29.049 "is_configured": true, 00:21:29.049 "data_offset": 2048, 00:21:29.049 "data_size": 63488 00:21:29.049 }, 00:21:29.049 { 00:21:29.049 "name": "BaseBdev4", 00:21:29.049 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:29.049 "is_configured": true, 00:21:29.049 "data_offset": 2048, 
00:21:29.049 "data_size": 63488 00:21:29.049 } 00:21:29.049 ] 00:21:29.049 }' 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.049 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.308 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:29.308 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.308 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.308 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:29.308 [2024-11-20 05:32:00.960436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:29.308 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.308 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:21:29.308 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.308 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.308 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.308 05:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:29.308 05:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.308 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:21:29.308 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:29.308 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:29.308 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 
00:21:29.308 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:29.308 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:29.308 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:29.308 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:29.308 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:29.308 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:29.308 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:29.308 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:29.308 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:29.308 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:29.567 [2024-11-20 05:32:01.220182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:29.567 /dev/nbd0 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q 
-w nbd0 /proc/partitions 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:29.567 1+0 records in 00:21:29.567 1+0 records out 00:21:29.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307414 s, 13.3 MB/s 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:29.567 05:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:21:36.134 63488+0 records in 00:21:36.134 63488+0 records out 00:21:36.134 32505856 bytes (33 MB, 31 MiB) copied, 5.58363 s, 5.8 MB/s 00:21:36.134 05:32:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:36.134 05:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:36.134 05:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:36.134 05:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:36.134 05:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:36.134 05:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:36.134 05:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:36.134 [2024-11-20 05:32:07.081326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:21:36.134 [2024-11-20 05:32:07.109417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.134 "name": 
"raid_bdev1", 00:21:36.134 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:36.134 "strip_size_kb": 0, 00:21:36.134 "state": "online", 00:21:36.134 "raid_level": "raid1", 00:21:36.134 "superblock": true, 00:21:36.134 "num_base_bdevs": 4, 00:21:36.134 "num_base_bdevs_discovered": 3, 00:21:36.134 "num_base_bdevs_operational": 3, 00:21:36.134 "base_bdevs_list": [ 00:21:36.134 { 00:21:36.134 "name": null, 00:21:36.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.134 "is_configured": false, 00:21:36.134 "data_offset": 0, 00:21:36.134 "data_size": 63488 00:21:36.134 }, 00:21:36.134 { 00:21:36.134 "name": "BaseBdev2", 00:21:36.134 "uuid": "b86de24d-f6a3-57eb-ac28-b227aab50a80", 00:21:36.134 "is_configured": true, 00:21:36.134 "data_offset": 2048, 00:21:36.134 "data_size": 63488 00:21:36.134 }, 00:21:36.134 { 00:21:36.134 "name": "BaseBdev3", 00:21:36.134 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:36.134 "is_configured": true, 00:21:36.134 "data_offset": 2048, 00:21:36.134 "data_size": 63488 00:21:36.134 }, 00:21:36.134 { 00:21:36.134 "name": "BaseBdev4", 00:21:36.134 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:36.134 "is_configured": true, 00:21:36.134 "data_offset": 2048, 00:21:36.134 "data_size": 63488 00:21:36.134 } 00:21:36.134 ] 00:21:36.134 }' 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.134 [2024-11-20 05:32:07.425464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:36.134 [2024-11-20 05:32:07.434135] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.134 05:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:36.134 [2024-11-20 05:32:07.435860] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:36.699 "name": "raid_bdev1", 00:21:36.699 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:36.699 "strip_size_kb": 0, 00:21:36.699 "state": "online", 00:21:36.699 "raid_level": "raid1", 00:21:36.699 "superblock": true, 00:21:36.699 "num_base_bdevs": 4, 00:21:36.699 "num_base_bdevs_discovered": 4, 00:21:36.699 "num_base_bdevs_operational": 4, 00:21:36.699 
"process": { 00:21:36.699 "type": "rebuild", 00:21:36.699 "target": "spare", 00:21:36.699 "progress": { 00:21:36.699 "blocks": 20480, 00:21:36.699 "percent": 32 00:21:36.699 } 00:21:36.699 }, 00:21:36.699 "base_bdevs_list": [ 00:21:36.699 { 00:21:36.699 "name": "spare", 00:21:36.699 "uuid": "1d0be8b1-73b9-513f-b1bc-e6e727fc263c", 00:21:36.699 "is_configured": true, 00:21:36.699 "data_offset": 2048, 00:21:36.699 "data_size": 63488 00:21:36.699 }, 00:21:36.699 { 00:21:36.699 "name": "BaseBdev2", 00:21:36.699 "uuid": "b86de24d-f6a3-57eb-ac28-b227aab50a80", 00:21:36.699 "is_configured": true, 00:21:36.699 "data_offset": 2048, 00:21:36.699 "data_size": 63488 00:21:36.699 }, 00:21:36.699 { 00:21:36.699 "name": "BaseBdev3", 00:21:36.699 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:36.699 "is_configured": true, 00:21:36.699 "data_offset": 2048, 00:21:36.699 "data_size": 63488 00:21:36.699 }, 00:21:36.699 { 00:21:36.699 "name": "BaseBdev4", 00:21:36.699 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:36.699 "is_configured": true, 00:21:36.699 "data_offset": 2048, 00:21:36.699 "data_size": 63488 00:21:36.699 } 00:21:36.699 ] 00:21:36.699 }' 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.699 05:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.699 [2024-11-20 05:32:08.529869] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:36.957 [2024-11-20 05:32:08.542401] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:36.957 [2024-11-20 05:32:08.542491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:36.957 [2024-11-20 05:32:08.542511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:36.957 [2024-11-20 05:32:08.542521] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 
-- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.957 "name": "raid_bdev1", 00:21:36.957 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:36.957 "strip_size_kb": 0, 00:21:36.957 "state": "online", 00:21:36.957 "raid_level": "raid1", 00:21:36.957 "superblock": true, 00:21:36.957 "num_base_bdevs": 4, 00:21:36.957 "num_base_bdevs_discovered": 3, 00:21:36.957 "num_base_bdevs_operational": 3, 00:21:36.957 "base_bdevs_list": [ 00:21:36.957 { 00:21:36.957 "name": null, 00:21:36.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.957 "is_configured": false, 00:21:36.957 "data_offset": 0, 00:21:36.957 "data_size": 63488 00:21:36.957 }, 00:21:36.957 { 00:21:36.957 "name": "BaseBdev2", 00:21:36.957 "uuid": "b86de24d-f6a3-57eb-ac28-b227aab50a80", 00:21:36.957 "is_configured": true, 00:21:36.957 "data_offset": 2048, 00:21:36.957 "data_size": 63488 00:21:36.957 }, 00:21:36.957 { 00:21:36.957 "name": "BaseBdev3", 00:21:36.957 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:36.957 "is_configured": true, 00:21:36.957 "data_offset": 2048, 00:21:36.957 "data_size": 63488 00:21:36.957 }, 00:21:36.957 { 00:21:36.957 "name": "BaseBdev4", 00:21:36.957 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:36.957 "is_configured": true, 00:21:36.957 "data_offset": 2048, 00:21:36.957 "data_size": 63488 00:21:36.957 } 00:21:36.957 ] 00:21:36.957 }' 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.957 05:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.215 05:32:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:37.215 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:37.215 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:37.215 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:37.215 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:37.215 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.215 05:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.215 05:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.215 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.215 05:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.215 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:37.215 "name": "raid_bdev1", 00:21:37.215 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:37.215 "strip_size_kb": 0, 00:21:37.215 "state": "online", 00:21:37.215 "raid_level": "raid1", 00:21:37.215 "superblock": true, 00:21:37.215 "num_base_bdevs": 4, 00:21:37.215 "num_base_bdevs_discovered": 3, 00:21:37.215 "num_base_bdevs_operational": 3, 00:21:37.215 "base_bdevs_list": [ 00:21:37.215 { 00:21:37.215 "name": null, 00:21:37.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.215 "is_configured": false, 00:21:37.215 "data_offset": 0, 00:21:37.215 "data_size": 63488 00:21:37.215 }, 00:21:37.215 { 00:21:37.215 "name": "BaseBdev2", 00:21:37.215 "uuid": "b86de24d-f6a3-57eb-ac28-b227aab50a80", 00:21:37.215 "is_configured": true, 00:21:37.215 "data_offset": 2048, 00:21:37.215 "data_size": 
63488 00:21:37.215 }, 00:21:37.215 { 00:21:37.215 "name": "BaseBdev3", 00:21:37.215 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:37.215 "is_configured": true, 00:21:37.215 "data_offset": 2048, 00:21:37.215 "data_size": 63488 00:21:37.215 }, 00:21:37.215 { 00:21:37.215 "name": "BaseBdev4", 00:21:37.215 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:37.215 "is_configured": true, 00:21:37.215 "data_offset": 2048, 00:21:37.215 "data_size": 63488 00:21:37.216 } 00:21:37.216 ] 00:21:37.216 }' 00:21:37.216 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:37.216 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:37.216 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:37.216 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:37.216 05:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:37.216 05:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.216 05:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.216 [2024-11-20 05:32:08.997649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:37.216 [2024-11-20 05:32:09.005737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:21:37.216 05:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.216 05:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:37.216 [2024-11-20 05:32:09.007401] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:38.614 "name": "raid_bdev1", 00:21:38.614 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:38.614 "strip_size_kb": 0, 00:21:38.614 "state": "online", 00:21:38.614 "raid_level": "raid1", 00:21:38.614 "superblock": true, 00:21:38.614 "num_base_bdevs": 4, 00:21:38.614 "num_base_bdevs_discovered": 4, 00:21:38.614 "num_base_bdevs_operational": 4, 00:21:38.614 "process": { 00:21:38.614 "type": "rebuild", 00:21:38.614 "target": "spare", 00:21:38.614 "progress": { 00:21:38.614 "blocks": 20480, 00:21:38.614 "percent": 32 00:21:38.614 } 00:21:38.614 }, 00:21:38.614 "base_bdevs_list": [ 00:21:38.614 { 00:21:38.614 "name": "spare", 00:21:38.614 "uuid": "1d0be8b1-73b9-513f-b1bc-e6e727fc263c", 00:21:38.614 "is_configured": true, 00:21:38.614 "data_offset": 2048, 00:21:38.614 "data_size": 63488 00:21:38.614 }, 00:21:38.614 { 00:21:38.614 "name": "BaseBdev2", 00:21:38.614 "uuid": 
"b86de24d-f6a3-57eb-ac28-b227aab50a80", 00:21:38.614 "is_configured": true, 00:21:38.614 "data_offset": 2048, 00:21:38.614 "data_size": 63488 00:21:38.614 }, 00:21:38.614 { 00:21:38.614 "name": "BaseBdev3", 00:21:38.614 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:38.614 "is_configured": true, 00:21:38.614 "data_offset": 2048, 00:21:38.614 "data_size": 63488 00:21:38.614 }, 00:21:38.614 { 00:21:38.614 "name": "BaseBdev4", 00:21:38.614 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:38.614 "is_configured": true, 00:21:38.614 "data_offset": 2048, 00:21:38.614 "data_size": 63488 00:21:38.614 } 00:21:38.614 ] 00:21:38.614 }' 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:38.614 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.614 05:32:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.614 [2024-11-20 05:32:10.113354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:38.614 [2024-11-20 05:32:10.313328] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.614 05:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:38.615 "name": "raid_bdev1", 00:21:38.615 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:38.615 "strip_size_kb": 0, 00:21:38.615 
"state": "online", 00:21:38.615 "raid_level": "raid1", 00:21:38.615 "superblock": true, 00:21:38.615 "num_base_bdevs": 4, 00:21:38.615 "num_base_bdevs_discovered": 3, 00:21:38.615 "num_base_bdevs_operational": 3, 00:21:38.615 "process": { 00:21:38.615 "type": "rebuild", 00:21:38.615 "target": "spare", 00:21:38.615 "progress": { 00:21:38.615 "blocks": 24576, 00:21:38.615 "percent": 38 00:21:38.615 } 00:21:38.615 }, 00:21:38.615 "base_bdevs_list": [ 00:21:38.615 { 00:21:38.615 "name": "spare", 00:21:38.615 "uuid": "1d0be8b1-73b9-513f-b1bc-e6e727fc263c", 00:21:38.615 "is_configured": true, 00:21:38.615 "data_offset": 2048, 00:21:38.615 "data_size": 63488 00:21:38.615 }, 00:21:38.615 { 00:21:38.615 "name": null, 00:21:38.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.615 "is_configured": false, 00:21:38.615 "data_offset": 0, 00:21:38.615 "data_size": 63488 00:21:38.615 }, 00:21:38.615 { 00:21:38.615 "name": "BaseBdev3", 00:21:38.615 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:38.615 "is_configured": true, 00:21:38.615 "data_offset": 2048, 00:21:38.615 "data_size": 63488 00:21:38.615 }, 00:21:38.615 { 00:21:38.615 "name": "BaseBdev4", 00:21:38.615 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:38.615 "is_configured": true, 00:21:38.615 "data_offset": 2048, 00:21:38.615 "data_size": 63488 00:21:38.615 } 00:21:38.615 ] 00:21:38.615 }' 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=365 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.615 05:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.876 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:38.876 "name": "raid_bdev1", 00:21:38.876 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:38.876 "strip_size_kb": 0, 00:21:38.876 "state": "online", 00:21:38.876 "raid_level": "raid1", 00:21:38.876 "superblock": true, 00:21:38.876 "num_base_bdevs": 4, 00:21:38.876 "num_base_bdevs_discovered": 3, 00:21:38.876 "num_base_bdevs_operational": 3, 00:21:38.876 "process": { 00:21:38.876 "type": "rebuild", 00:21:38.876 "target": "spare", 00:21:38.876 "progress": { 00:21:38.876 "blocks": 26624, 00:21:38.876 "percent": 41 00:21:38.876 } 00:21:38.876 }, 00:21:38.876 "base_bdevs_list": [ 00:21:38.876 { 00:21:38.876 "name": "spare", 00:21:38.876 "uuid": "1d0be8b1-73b9-513f-b1bc-e6e727fc263c", 00:21:38.876 "is_configured": 
true, 00:21:38.876 "data_offset": 2048, 00:21:38.876 "data_size": 63488 00:21:38.876 }, 00:21:38.876 { 00:21:38.876 "name": null, 00:21:38.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.876 "is_configured": false, 00:21:38.876 "data_offset": 0, 00:21:38.876 "data_size": 63488 00:21:38.876 }, 00:21:38.876 { 00:21:38.876 "name": "BaseBdev3", 00:21:38.876 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:38.876 "is_configured": true, 00:21:38.876 "data_offset": 2048, 00:21:38.876 "data_size": 63488 00:21:38.876 }, 00:21:38.876 { 00:21:38.876 "name": "BaseBdev4", 00:21:38.876 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:38.876 "is_configured": true, 00:21:38.876 "data_offset": 2048, 00:21:38.876 "data_size": 63488 00:21:38.876 } 00:21:38.876 ] 00:21:38.876 }' 00:21:38.876 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:38.876 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:38.876 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:38.876 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:38.876 05:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:39.809 "name": "raid_bdev1", 00:21:39.809 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:39.809 "strip_size_kb": 0, 00:21:39.809 "state": "online", 00:21:39.809 "raid_level": "raid1", 00:21:39.809 "superblock": true, 00:21:39.809 "num_base_bdevs": 4, 00:21:39.809 "num_base_bdevs_discovered": 3, 00:21:39.809 "num_base_bdevs_operational": 3, 00:21:39.809 "process": { 00:21:39.809 "type": "rebuild", 00:21:39.809 "target": "spare", 00:21:39.809 "progress": { 00:21:39.809 "blocks": 49152, 00:21:39.809 "percent": 77 00:21:39.809 } 00:21:39.809 }, 00:21:39.809 "base_bdevs_list": [ 00:21:39.809 { 00:21:39.809 "name": "spare", 00:21:39.809 "uuid": "1d0be8b1-73b9-513f-b1bc-e6e727fc263c", 00:21:39.809 "is_configured": true, 00:21:39.809 "data_offset": 2048, 00:21:39.809 "data_size": 63488 00:21:39.809 }, 00:21:39.809 { 00:21:39.809 "name": null, 00:21:39.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.809 "is_configured": false, 00:21:39.809 "data_offset": 0, 00:21:39.809 "data_size": 63488 00:21:39.809 }, 00:21:39.809 { 00:21:39.809 "name": "BaseBdev3", 00:21:39.809 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:39.809 "is_configured": true, 00:21:39.809 "data_offset": 2048, 00:21:39.809 "data_size": 63488 00:21:39.809 }, 00:21:39.809 { 00:21:39.809 "name": "BaseBdev4", 00:21:39.809 "uuid": 
"3350d2c2-80a5-59e7-a178-517709a90394", 00:21:39.809 "is_configured": true, 00:21:39.809 "data_offset": 2048, 00:21:39.809 "data_size": 63488 00:21:39.809 } 00:21:39.809 ] 00:21:39.809 }' 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:39.809 05:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:40.742 [2024-11-20 05:32:12.222284] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:40.742 [2024-11-20 05:32:12.222358] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:40.742 [2024-11-20 05:32:12.222477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:41.001 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:41.001 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:41.001 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:41.001 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:41.001 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:41.001 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:41.001 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.001 05:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:41.001 05:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.001 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.001 05:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.001 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:41.001 "name": "raid_bdev1", 00:21:41.001 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:41.001 "strip_size_kb": 0, 00:21:41.001 "state": "online", 00:21:41.001 "raid_level": "raid1", 00:21:41.001 "superblock": true, 00:21:41.001 "num_base_bdevs": 4, 00:21:41.001 "num_base_bdevs_discovered": 3, 00:21:41.001 "num_base_bdevs_operational": 3, 00:21:41.001 "base_bdevs_list": [ 00:21:41.001 { 00:21:41.001 "name": "spare", 00:21:41.001 "uuid": "1d0be8b1-73b9-513f-b1bc-e6e727fc263c", 00:21:41.001 "is_configured": true, 00:21:41.001 "data_offset": 2048, 00:21:41.001 "data_size": 63488 00:21:41.001 }, 00:21:41.001 { 00:21:41.001 "name": null, 00:21:41.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.001 "is_configured": false, 00:21:41.001 "data_offset": 0, 00:21:41.001 "data_size": 63488 00:21:41.001 }, 00:21:41.001 { 00:21:41.001 "name": "BaseBdev3", 00:21:41.001 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:41.001 "is_configured": true, 00:21:41.001 "data_offset": 2048, 00:21:41.001 "data_size": 63488 00:21:41.001 }, 00:21:41.001 { 00:21:41.001 "name": "BaseBdev4", 00:21:41.001 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:41.001 "is_configured": true, 00:21:41.001 "data_offset": 2048, 00:21:41.001 "data_size": 63488 00:21:41.001 } 00:21:41.001 ] 00:21:41.001 }' 00:21:41.001 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 
00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:41.002 "name": "raid_bdev1", 00:21:41.002 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:41.002 "strip_size_kb": 0, 00:21:41.002 "state": "online", 00:21:41.002 "raid_level": "raid1", 00:21:41.002 "superblock": true, 00:21:41.002 "num_base_bdevs": 4, 00:21:41.002 "num_base_bdevs_discovered": 3, 00:21:41.002 "num_base_bdevs_operational": 3, 00:21:41.002 "base_bdevs_list": [ 00:21:41.002 { 00:21:41.002 "name": "spare", 00:21:41.002 "uuid": 
"1d0be8b1-73b9-513f-b1bc-e6e727fc263c", 00:21:41.002 "is_configured": true, 00:21:41.002 "data_offset": 2048, 00:21:41.002 "data_size": 63488 00:21:41.002 }, 00:21:41.002 { 00:21:41.002 "name": null, 00:21:41.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.002 "is_configured": false, 00:21:41.002 "data_offset": 0, 00:21:41.002 "data_size": 63488 00:21:41.002 }, 00:21:41.002 { 00:21:41.002 "name": "BaseBdev3", 00:21:41.002 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:41.002 "is_configured": true, 00:21:41.002 "data_offset": 2048, 00:21:41.002 "data_size": 63488 00:21:41.002 }, 00:21:41.002 { 00:21:41.002 "name": "BaseBdev4", 00:21:41.002 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:41.002 "is_configured": true, 00:21:41.002 "data_offset": 2048, 00:21:41.002 "data_size": 63488 00:21:41.002 } 00:21:41.002 ] 00:21:41.002 }' 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.002 05:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.260 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.260 "name": "raid_bdev1", 00:21:41.260 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:41.260 "strip_size_kb": 0, 00:21:41.260 "state": "online", 00:21:41.260 "raid_level": "raid1", 00:21:41.260 "superblock": true, 00:21:41.260 "num_base_bdevs": 4, 00:21:41.260 "num_base_bdevs_discovered": 3, 00:21:41.260 "num_base_bdevs_operational": 3, 00:21:41.260 "base_bdevs_list": [ 00:21:41.260 { 00:21:41.260 "name": "spare", 00:21:41.260 "uuid": "1d0be8b1-73b9-513f-b1bc-e6e727fc263c", 00:21:41.260 "is_configured": true, 00:21:41.260 "data_offset": 2048, 00:21:41.260 "data_size": 63488 00:21:41.260 }, 00:21:41.260 { 00:21:41.260 "name": null, 00:21:41.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.260 "is_configured": false, 00:21:41.261 "data_offset": 0, 00:21:41.261 "data_size": 63488 00:21:41.261 }, 00:21:41.261 { 00:21:41.261 "name": "BaseBdev3", 00:21:41.261 "uuid": 
"5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:41.261 "is_configured": true, 00:21:41.261 "data_offset": 2048, 00:21:41.261 "data_size": 63488 00:21:41.261 }, 00:21:41.261 { 00:21:41.261 "name": "BaseBdev4", 00:21:41.261 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:41.261 "is_configured": true, 00:21:41.261 "data_offset": 2048, 00:21:41.261 "data_size": 63488 00:21:41.261 } 00:21:41.261 ] 00:21:41.261 }' 00:21:41.261 05:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.261 05:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:41.518 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.518 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 [2024-11-20 05:32:13.147110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:41.518 [2024-11-20 05:32:13.147142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:41.518 [2024-11-20 05:32:13.147207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:41.518 [2024-11-20 05:32:13.147273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:41.518 [2024-11-20 05:32:13.147281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:41.518 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.518 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:21:41.518 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.518 05:32:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.518 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.518 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:41.518 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:41.518 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:41.518 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:41.518 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:41.518 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:41.519 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:41.519 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:41.519 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:41.519 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:41.519 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:41.519 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:41.519 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:41.776 /dev/nbd0 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:41.776 05:32:13 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:41.776 1+0 records in 00:21:41.776 1+0 records out 00:21:41.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271617 s, 15.1 MB/s 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:41.776 05:32:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:42.034 /dev/nbd1 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:42.034 1+0 records in 00:21:42.034 1+0 records out 00:21:42.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267331 s, 15.3 MB/s 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # 
'[' 4096 '!=' 0 ']' 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:42.034 05:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:42.291 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:42.291 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:42.291 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:42.291 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:42.292 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:42.292 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:42.292 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:42.292 
05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:42.292 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:42.292 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:42.550 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:42.550 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:42.550 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:42.550 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:42.550 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:42.550 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:42.550 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:42.550 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:42.550 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:42.550 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:42.550 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.550 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.550 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.550 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:42.550 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.550 05:32:14 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:42.551 [2024-11-20 05:32:14.244536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:42.551 [2024-11-20 05:32:14.244584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.551 [2024-11-20 05:32:14.244603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:42.551 [2024-11-20 05:32:14.244611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.551 [2024-11-20 05:32:14.246470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.551 [2024-11-20 05:32:14.246499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:42.551 [2024-11-20 05:32:14.246572] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:42.551 [2024-11-20 05:32:14.246609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:42.551 [2024-11-20 05:32:14.246715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:42.551 [2024-11-20 05:32:14.246791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:42.551 spare 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.551 [2024-11-20 05:32:14.346876] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:42.551 [2024-11-20 05:32:14.346915] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:42.551 [2024-11-20 
05:32:14.347203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:21:42.551 [2024-11-20 05:32:14.347379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:42.551 [2024-11-20 05:32:14.347397] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:42.551 [2024-11-20 05:32:14.347537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.551 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.808 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.808 "name": "raid_bdev1", 00:21:42.808 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:42.808 "strip_size_kb": 0, 00:21:42.808 "state": "online", 00:21:42.808 "raid_level": "raid1", 00:21:42.808 "superblock": true, 00:21:42.808 "num_base_bdevs": 4, 00:21:42.808 "num_base_bdevs_discovered": 3, 00:21:42.808 "num_base_bdevs_operational": 3, 00:21:42.808 "base_bdevs_list": [ 00:21:42.808 { 00:21:42.808 "name": "spare", 00:21:42.808 "uuid": "1d0be8b1-73b9-513f-b1bc-e6e727fc263c", 00:21:42.808 "is_configured": true, 00:21:42.808 "data_offset": 2048, 00:21:42.808 "data_size": 63488 00:21:42.808 }, 00:21:42.808 { 00:21:42.808 "name": null, 00:21:42.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.808 "is_configured": false, 00:21:42.808 "data_offset": 2048, 00:21:42.808 "data_size": 63488 00:21:42.808 }, 00:21:42.808 { 00:21:42.808 "name": "BaseBdev3", 00:21:42.808 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:42.808 "is_configured": true, 00:21:42.808 "data_offset": 2048, 00:21:42.808 "data_size": 63488 00:21:42.808 }, 00:21:42.808 { 00:21:42.808 "name": "BaseBdev4", 00:21:42.808 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:42.808 "is_configured": true, 00:21:42.809 "data_offset": 2048, 00:21:42.809 "data_size": 63488 00:21:42.809 } 00:21:42.809 ] 00:21:42.809 }' 00:21:42.809 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.809 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:43.067 "name": "raid_bdev1", 00:21:43.067 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:43.067 "strip_size_kb": 0, 00:21:43.067 "state": "online", 00:21:43.067 "raid_level": "raid1", 00:21:43.067 "superblock": true, 00:21:43.067 "num_base_bdevs": 4, 00:21:43.067 "num_base_bdevs_discovered": 3, 00:21:43.067 "num_base_bdevs_operational": 3, 00:21:43.067 "base_bdevs_list": [ 00:21:43.067 { 00:21:43.067 "name": "spare", 00:21:43.067 "uuid": "1d0be8b1-73b9-513f-b1bc-e6e727fc263c", 00:21:43.067 "is_configured": true, 00:21:43.067 "data_offset": 2048, 00:21:43.067 "data_size": 63488 00:21:43.067 }, 00:21:43.067 { 00:21:43.067 "name": null, 00:21:43.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.067 "is_configured": false, 00:21:43.067 "data_offset": 2048, 00:21:43.067 "data_size": 63488 00:21:43.067 }, 00:21:43.067 { 00:21:43.067 "name": "BaseBdev3", 00:21:43.067 
"uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:43.067 "is_configured": true, 00:21:43.067 "data_offset": 2048, 00:21:43.067 "data_size": 63488 00:21:43.067 }, 00:21:43.067 { 00:21:43.067 "name": "BaseBdev4", 00:21:43.067 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:43.067 "is_configured": true, 00:21:43.067 "data_offset": 2048, 00:21:43.067 "data_size": 63488 00:21:43.067 } 00:21:43.067 ] 00:21:43.067 }' 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.067 [2024-11-20 05:32:14.808719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:43.067 05:32:14 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.067 "name": "raid_bdev1", 00:21:43.067 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:43.067 "strip_size_kb": 0, 00:21:43.067 "state": "online", 
00:21:43.067 "raid_level": "raid1", 00:21:43.067 "superblock": true, 00:21:43.067 "num_base_bdevs": 4, 00:21:43.067 "num_base_bdevs_discovered": 2, 00:21:43.067 "num_base_bdevs_operational": 2, 00:21:43.067 "base_bdevs_list": [ 00:21:43.067 { 00:21:43.067 "name": null, 00:21:43.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.067 "is_configured": false, 00:21:43.067 "data_offset": 0, 00:21:43.067 "data_size": 63488 00:21:43.067 }, 00:21:43.067 { 00:21:43.067 "name": null, 00:21:43.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.067 "is_configured": false, 00:21:43.067 "data_offset": 2048, 00:21:43.067 "data_size": 63488 00:21:43.067 }, 00:21:43.067 { 00:21:43.067 "name": "BaseBdev3", 00:21:43.067 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:43.067 "is_configured": true, 00:21:43.067 "data_offset": 2048, 00:21:43.067 "data_size": 63488 00:21:43.067 }, 00:21:43.067 { 00:21:43.067 "name": "BaseBdev4", 00:21:43.067 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:43.067 "is_configured": true, 00:21:43.067 "data_offset": 2048, 00:21:43.067 "data_size": 63488 00:21:43.067 } 00:21:43.067 ] 00:21:43.067 }' 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.067 05:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.325 05:32:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:43.325 05:32:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.325 05:32:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.325 [2024-11-20 05:32:15.136763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:43.325 [2024-11-20 05:32:15.136907] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:21:43.325 [2024-11-20 05:32:15.136924] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:43.325 [2024-11-20 05:32:15.136950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:43.325 [2024-11-20 05:32:15.144406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:21:43.325 05:32:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.325 05:32:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:43.325 [2024-11-20 05:32:15.145985] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:44.750 "name": "raid_bdev1", 00:21:44.750 "uuid": 
"1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:44.750 "strip_size_kb": 0, 00:21:44.750 "state": "online", 00:21:44.750 "raid_level": "raid1", 00:21:44.750 "superblock": true, 00:21:44.750 "num_base_bdevs": 4, 00:21:44.750 "num_base_bdevs_discovered": 3, 00:21:44.750 "num_base_bdevs_operational": 3, 00:21:44.750 "process": { 00:21:44.750 "type": "rebuild", 00:21:44.750 "target": "spare", 00:21:44.750 "progress": { 00:21:44.750 "blocks": 20480, 00:21:44.750 "percent": 32 00:21:44.750 } 00:21:44.750 }, 00:21:44.750 "base_bdevs_list": [ 00:21:44.750 { 00:21:44.750 "name": "spare", 00:21:44.750 "uuid": "1d0be8b1-73b9-513f-b1bc-e6e727fc263c", 00:21:44.750 "is_configured": true, 00:21:44.750 "data_offset": 2048, 00:21:44.750 "data_size": 63488 00:21:44.750 }, 00:21:44.750 { 00:21:44.750 "name": null, 00:21:44.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.750 "is_configured": false, 00:21:44.750 "data_offset": 2048, 00:21:44.750 "data_size": 63488 00:21:44.750 }, 00:21:44.750 { 00:21:44.750 "name": "BaseBdev3", 00:21:44.750 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:44.750 "is_configured": true, 00:21:44.750 "data_offset": 2048, 00:21:44.750 "data_size": 63488 00:21:44.750 }, 00:21:44.750 { 00:21:44.750 "name": "BaseBdev4", 00:21:44.750 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:44.750 "is_configured": true, 00:21:44.750 "data_offset": 2048, 00:21:44.750 "data_size": 63488 00:21:44.750 } 00:21:44.750 ] 00:21:44.750 }' 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.750 [2024-11-20 05:32:16.256282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:44.750 [2024-11-20 05:32:16.351481] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:44.750 [2024-11-20 05:32:16.351548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.750 [2024-11-20 05:32:16.351563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:44.750 [2024-11-20 05:32:16.351569] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.750 "name": "raid_bdev1", 00:21:44.750 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:44.750 "strip_size_kb": 0, 00:21:44.750 "state": "online", 00:21:44.750 "raid_level": "raid1", 00:21:44.750 "superblock": true, 00:21:44.750 "num_base_bdevs": 4, 00:21:44.750 "num_base_bdevs_discovered": 2, 00:21:44.750 "num_base_bdevs_operational": 2, 00:21:44.750 "base_bdevs_list": [ 00:21:44.750 { 00:21:44.750 "name": null, 00:21:44.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.750 "is_configured": false, 00:21:44.750 "data_offset": 0, 00:21:44.750 "data_size": 63488 00:21:44.750 }, 00:21:44.750 { 00:21:44.750 "name": null, 00:21:44.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.750 "is_configured": false, 00:21:44.750 "data_offset": 2048, 00:21:44.750 "data_size": 63488 00:21:44.750 }, 00:21:44.750 { 00:21:44.750 "name": "BaseBdev3", 00:21:44.750 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:44.750 "is_configured": true, 00:21:44.750 "data_offset": 2048, 00:21:44.750 "data_size": 63488 00:21:44.750 }, 00:21:44.750 { 00:21:44.750 "name": "BaseBdev4", 00:21:44.750 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:44.750 "is_configured": true, 00:21:44.750 
"data_offset": 2048, 00:21:44.750 "data_size": 63488 00:21:44.750 } 00:21:44.750 ] 00:21:44.750 }' 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.750 05:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.009 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:45.009 05:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.009 05:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.009 [2024-11-20 05:32:16.655822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:45.009 [2024-11-20 05:32:16.655923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.009 [2024-11-20 05:32:16.655955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:21:45.009 [2024-11-20 05:32:16.655966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.009 [2024-11-20 05:32:16.656518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.009 [2024-11-20 05:32:16.656544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:45.009 [2024-11-20 05:32:16.656654] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:45.009 [2024-11-20 05:32:16.656669] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:45.009 [2024-11-20 05:32:16.656685] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:45.009 [2024-11-20 05:32:16.656710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:45.009 [2024-11-20 05:32:16.666836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:21:45.009 spare 00:21:45.009 05:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.009 05:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:45.009 [2024-11-20 05:32:16.668981] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:45.968 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:45.968 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:45.968 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:45.968 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:45.968 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:45.968 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.968 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.968 05:32:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.968 05:32:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.968 05:32:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.968 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:45.968 "name": "raid_bdev1", 00:21:45.968 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:45.968 "strip_size_kb": 0, 00:21:45.968 "state": "online", 00:21:45.968 
"raid_level": "raid1", 00:21:45.968 "superblock": true, 00:21:45.968 "num_base_bdevs": 4, 00:21:45.968 "num_base_bdevs_discovered": 3, 00:21:45.968 "num_base_bdevs_operational": 3, 00:21:45.968 "process": { 00:21:45.969 "type": "rebuild", 00:21:45.969 "target": "spare", 00:21:45.969 "progress": { 00:21:45.969 "blocks": 20480, 00:21:45.969 "percent": 32 00:21:45.969 } 00:21:45.969 }, 00:21:45.969 "base_bdevs_list": [ 00:21:45.969 { 00:21:45.969 "name": "spare", 00:21:45.969 "uuid": "1d0be8b1-73b9-513f-b1bc-e6e727fc263c", 00:21:45.969 "is_configured": true, 00:21:45.969 "data_offset": 2048, 00:21:45.969 "data_size": 63488 00:21:45.969 }, 00:21:45.969 { 00:21:45.969 "name": null, 00:21:45.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.969 "is_configured": false, 00:21:45.969 "data_offset": 2048, 00:21:45.969 "data_size": 63488 00:21:45.969 }, 00:21:45.969 { 00:21:45.969 "name": "BaseBdev3", 00:21:45.969 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:45.969 "is_configured": true, 00:21:45.969 "data_offset": 2048, 00:21:45.969 "data_size": 63488 00:21:45.969 }, 00:21:45.969 { 00:21:45.969 "name": "BaseBdev4", 00:21:45.969 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:45.969 "is_configured": true, 00:21:45.969 "data_offset": 2048, 00:21:45.969 "data_size": 63488 00:21:45.969 } 00:21:45.969 ] 00:21:45.969 }' 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.969 [2024-11-20 05:32:17.770540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:45.969 [2024-11-20 05:32:17.776054] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:45.969 [2024-11-20 05:32:17.776129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.969 [2024-11-20 05:32:17.776146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:45.969 [2024-11-20 05:32:17.776157] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.969 
05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.969 05:32:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.228 05:32:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.228 05:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.228 "name": "raid_bdev1", 00:21:46.228 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:46.228 "strip_size_kb": 0, 00:21:46.228 "state": "online", 00:21:46.228 "raid_level": "raid1", 00:21:46.228 "superblock": true, 00:21:46.228 "num_base_bdevs": 4, 00:21:46.228 "num_base_bdevs_discovered": 2, 00:21:46.228 "num_base_bdevs_operational": 2, 00:21:46.228 "base_bdevs_list": [ 00:21:46.228 { 00:21:46.228 "name": null, 00:21:46.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.228 "is_configured": false, 00:21:46.228 "data_offset": 0, 00:21:46.228 "data_size": 63488 00:21:46.228 }, 00:21:46.228 { 00:21:46.228 "name": null, 00:21:46.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.228 "is_configured": false, 00:21:46.228 "data_offset": 2048, 00:21:46.228 "data_size": 63488 00:21:46.228 }, 00:21:46.228 { 00:21:46.228 "name": "BaseBdev3", 00:21:46.228 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:46.228 "is_configured": true, 00:21:46.228 "data_offset": 2048, 00:21:46.228 "data_size": 63488 00:21:46.228 }, 00:21:46.228 { 00:21:46.228 "name": "BaseBdev4", 00:21:46.228 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:46.228 "is_configured": true, 00:21:46.228 "data_offset": 2048, 00:21:46.228 "data_size": 63488 00:21:46.228 } 00:21:46.228 ] 00:21:46.228 }' 00:21:46.228 05:32:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.228 05:32:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.487 05:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:46.487 05:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:46.487 05:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:46.487 05:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:46.487 05:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:46.487 05:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.487 05:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.487 05:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.487 05:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.487 05:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.487 05:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:46.487 "name": "raid_bdev1", 00:21:46.487 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:46.487 "strip_size_kb": 0, 00:21:46.487 "state": "online", 00:21:46.487 "raid_level": "raid1", 00:21:46.487 "superblock": true, 00:21:46.487 "num_base_bdevs": 4, 00:21:46.487 "num_base_bdevs_discovered": 2, 00:21:46.487 "num_base_bdevs_operational": 2, 00:21:46.487 "base_bdevs_list": [ 00:21:46.487 { 00:21:46.487 "name": null, 00:21:46.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.487 "is_configured": false, 00:21:46.487 "data_offset": 0, 00:21:46.487 "data_size": 63488 00:21:46.487 }, 00:21:46.487 
{ 00:21:46.487 "name": null, 00:21:46.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.487 "is_configured": false, 00:21:46.487 "data_offset": 2048, 00:21:46.487 "data_size": 63488 00:21:46.487 }, 00:21:46.487 { 00:21:46.487 "name": "BaseBdev3", 00:21:46.487 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:46.487 "is_configured": true, 00:21:46.487 "data_offset": 2048, 00:21:46.487 "data_size": 63488 00:21:46.487 }, 00:21:46.487 { 00:21:46.487 "name": "BaseBdev4", 00:21:46.488 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:46.488 "is_configured": true, 00:21:46.488 "data_offset": 2048, 00:21:46.488 "data_size": 63488 00:21:46.488 } 00:21:46.488 ] 00:21:46.488 }' 00:21:46.488 05:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:46.488 05:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:46.488 05:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:46.488 05:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:46.488 05:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:46.488 05:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.488 05:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.488 05:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.488 05:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:46.488 05:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.488 05:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.488 [2024-11-20 05:32:18.223358] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:46.488 [2024-11-20 05:32:18.223457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.488 [2024-11-20 05:32:18.223480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:21:46.488 [2024-11-20 05:32:18.223492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.488 [2024-11-20 05:32:18.224015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.488 [2024-11-20 05:32:18.224046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:46.488 [2024-11-20 05:32:18.224136] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:46.488 [2024-11-20 05:32:18.224153] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:46.488 [2024-11-20 05:32:18.224162] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:46.488 [2024-11-20 05:32:18.224177] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:46.488 BaseBdev1 00:21:46.488 05:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.488 05:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:47.422 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:47.422 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:47.422 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:47.422 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:47.422 05:32:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:47.422 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:47.422 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:47.422 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:47.422 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:47.422 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:47.422 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.422 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.422 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.422 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.422 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.679 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:47.679 "name": "raid_bdev1", 00:21:47.679 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:47.679 "strip_size_kb": 0, 00:21:47.679 "state": "online", 00:21:47.679 "raid_level": "raid1", 00:21:47.679 "superblock": true, 00:21:47.679 "num_base_bdevs": 4, 00:21:47.679 "num_base_bdevs_discovered": 2, 00:21:47.679 "num_base_bdevs_operational": 2, 00:21:47.679 "base_bdevs_list": [ 00:21:47.679 { 00:21:47.679 "name": null, 00:21:47.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.679 "is_configured": false, 00:21:47.679 "data_offset": 0, 00:21:47.679 "data_size": 63488 00:21:47.679 }, 00:21:47.679 { 00:21:47.679 "name": null, 00:21:47.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.679 
"is_configured": false, 00:21:47.679 "data_offset": 2048, 00:21:47.679 "data_size": 63488 00:21:47.679 }, 00:21:47.679 { 00:21:47.679 "name": "BaseBdev3", 00:21:47.679 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:47.679 "is_configured": true, 00:21:47.679 "data_offset": 2048, 00:21:47.679 "data_size": 63488 00:21:47.679 }, 00:21:47.679 { 00:21:47.679 "name": "BaseBdev4", 00:21:47.679 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:47.679 "is_configured": true, 00:21:47.679 "data_offset": 2048, 00:21:47.679 "data_size": 63488 00:21:47.679 } 00:21:47.679 ] 00:21:47.679 }' 00:21:47.679 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:47.679 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:21:47.937 "name": "raid_bdev1", 00:21:47.937 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:47.937 "strip_size_kb": 0, 00:21:47.937 "state": "online", 00:21:47.937 "raid_level": "raid1", 00:21:47.937 "superblock": true, 00:21:47.937 "num_base_bdevs": 4, 00:21:47.937 "num_base_bdevs_discovered": 2, 00:21:47.937 "num_base_bdevs_operational": 2, 00:21:47.937 "base_bdevs_list": [ 00:21:47.937 { 00:21:47.937 "name": null, 00:21:47.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.937 "is_configured": false, 00:21:47.937 "data_offset": 0, 00:21:47.937 "data_size": 63488 00:21:47.937 }, 00:21:47.937 { 00:21:47.937 "name": null, 00:21:47.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.937 "is_configured": false, 00:21:47.937 "data_offset": 2048, 00:21:47.937 "data_size": 63488 00:21:47.937 }, 00:21:47.937 { 00:21:47.937 "name": "BaseBdev3", 00:21:47.937 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:47.937 "is_configured": true, 00:21:47.937 "data_offset": 2048, 00:21:47.937 "data_size": 63488 00:21:47.937 }, 00:21:47.937 { 00:21:47.937 "name": "BaseBdev4", 00:21:47.937 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:47.937 "is_configured": true, 00:21:47.937 "data_offset": 2048, 00:21:47.937 "data_size": 63488 00:21:47.937 } 00:21:47.937 ] 00:21:47.937 }' 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.937 [2024-11-20 05:32:19.639692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:47.937 [2024-11-20 05:32:19.639900] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:47.937 [2024-11-20 05:32:19.639915] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:47.937 request: 00:21:47.937 { 00:21:47.937 "base_bdev": "BaseBdev1", 00:21:47.937 "raid_bdev": "raid_bdev1", 00:21:47.937 "method": "bdev_raid_add_base_bdev", 00:21:47.937 "req_id": 1 00:21:47.937 } 00:21:47.937 Got JSON-RPC error response 00:21:47.937 response: 00:21:47.937 { 00:21:47.937 "code": -22, 00:21:47.937 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:47.937 } 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:47.937 05:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:48.871 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:48.871 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.871 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:48.871 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:48.871 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:48.871 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:48.871 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.871 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.871 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.871 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.871 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.872 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.872 05:32:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.872 05:32:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:48.872 05:32:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.872 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.872 "name": "raid_bdev1", 00:21:48.872 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:48.872 "strip_size_kb": 0, 00:21:48.872 "state": "online", 00:21:48.872 "raid_level": "raid1", 00:21:48.872 "superblock": true, 00:21:48.872 "num_base_bdevs": 4, 00:21:48.872 "num_base_bdevs_discovered": 2, 00:21:48.872 "num_base_bdevs_operational": 2, 00:21:48.872 "base_bdevs_list": [ 00:21:48.872 { 00:21:48.872 "name": null, 00:21:48.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.872 "is_configured": false, 00:21:48.872 "data_offset": 0, 00:21:48.872 "data_size": 63488 00:21:48.872 }, 00:21:48.872 { 00:21:48.872 "name": null, 00:21:48.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.872 "is_configured": false, 00:21:48.872 "data_offset": 2048, 00:21:48.872 "data_size": 63488 00:21:48.872 }, 00:21:48.872 { 00:21:48.872 "name": "BaseBdev3", 00:21:48.872 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:48.872 "is_configured": true, 00:21:48.872 "data_offset": 2048, 00:21:48.872 "data_size": 63488 00:21:48.872 }, 00:21:48.872 { 00:21:48.872 "name": "BaseBdev4", 00:21:48.872 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:48.872 "is_configured": true, 00:21:48.872 "data_offset": 2048, 00:21:48.872 "data_size": 63488 00:21:48.872 } 00:21:48.872 ] 00:21:48.872 }' 00:21:48.872 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.872 05:32:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.130 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:49.130 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:49.130 05:32:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:49.130 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:49.130 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:49.130 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.130 05:32:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.130 05:32:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.130 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.130 05:32:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.431 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:49.431 "name": "raid_bdev1", 00:21:49.431 "uuid": "1035573a-70be-4bbe-b72a-0d08d960c99b", 00:21:49.431 "strip_size_kb": 0, 00:21:49.431 "state": "online", 00:21:49.431 "raid_level": "raid1", 00:21:49.431 "superblock": true, 00:21:49.431 "num_base_bdevs": 4, 00:21:49.431 "num_base_bdevs_discovered": 2, 00:21:49.431 "num_base_bdevs_operational": 2, 00:21:49.431 "base_bdevs_list": [ 00:21:49.431 { 00:21:49.431 "name": null, 00:21:49.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.431 "is_configured": false, 00:21:49.431 "data_offset": 0, 00:21:49.431 "data_size": 63488 00:21:49.431 }, 00:21:49.431 { 00:21:49.431 "name": null, 00:21:49.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.431 "is_configured": false, 00:21:49.431 "data_offset": 2048, 00:21:49.431 "data_size": 63488 00:21:49.431 }, 00:21:49.431 { 00:21:49.431 "name": "BaseBdev3", 00:21:49.431 "uuid": "5575242c-cedc-5c48-9479-b8f1527a1ffe", 00:21:49.431 "is_configured": true, 00:21:49.431 "data_offset": 2048, 00:21:49.431 "data_size": 63488 00:21:49.431 }, 
00:21:49.431 { 00:21:49.431 "name": "BaseBdev4", 00:21:49.431 "uuid": "3350d2c2-80a5-59e7-a178-517709a90394", 00:21:49.431 "is_configured": true, 00:21:49.431 "data_offset": 2048, 00:21:49.431 "data_size": 63488 00:21:49.431 } 00:21:49.431 ] 00:21:49.431 }' 00:21:49.431 05:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:49.431 05:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:49.431 05:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:49.431 05:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:49.431 05:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75914 00:21:49.431 05:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 75914 ']' 00:21:49.431 05:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 75914 00:21:49.431 05:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:21:49.431 05:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:49.431 05:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75914 00:21:49.431 05:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:49.431 05:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:49.431 killing process with pid 75914 00:21:49.431 05:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75914' 00:21:49.431 05:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 75914 00:21:49.431 Received shutdown signal, test time was about 60.000000 seconds 00:21:49.431 00:21:49.431 Latency(us) 00:21:49.431 
[2024-11-20T05:32:21.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.431 [2024-11-20T05:32:21.266Z] =================================================================================================================== 00:21:49.431 [2024-11-20T05:32:21.266Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:49.431 [2024-11-20 05:32:21.071508] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:49.431 [2024-11-20 05:32:21.071621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:49.431 [2024-11-20 05:32:21.071699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:49.431 [2024-11-20 05:32:21.071710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:49.431 05:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 75914 00:21:49.689 [2024-11-20 05:32:21.376149] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:50.256 05:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:21:50.256 00:21:50.256 real 0m22.639s 00:21:50.256 user 0m25.995s 00:21:50.256 sys 0m3.334s 00:21:50.256 05:32:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:50.256 05:32:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.256 ************************************ 00:21:50.256 END TEST raid_rebuild_test_sb 00:21:50.256 ************************************ 00:21:50.515 05:32:22 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:21:50.515 05:32:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:21:50.515 05:32:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:50.515 05:32:22 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:21:50.515 ************************************ 00:21:50.515 START TEST raid_rebuild_test_io 00:21:50.515 ************************************ 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76654 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76654 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 76654 ']' 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:50.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:50.515 05:32:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.515 [2024-11-20 05:32:22.186912] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:21:50.515 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:50.515 Zero copy mechanism will not be used. 00:21:50.515 [2024-11-20 05:32:22.187016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76654 ] 00:21:50.515 [2024-11-20 05:32:22.333732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.772 [2024-11-20 05:32:22.434869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.772 [2024-11-20 05:32:22.572651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:50.772 [2024-11-20 05:32:22.572697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:51.339 
05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.339 BaseBdev1_malloc 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.339 [2024-11-20 05:32:23.085886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:51.339 [2024-11-20 05:32:23.085962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.339 [2024-11-20 05:32:23.085985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:51.339 [2024-11-20 05:32:23.085997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.339 [2024-11-20 05:32:23.088181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.339 [2024-11-20 05:32:23.088222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:51.339 BaseBdev1 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:21:51.339 BaseBdev2_malloc 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.339 [2024-11-20 05:32:23.125680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:51.339 [2024-11-20 05:32:23.125751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.339 [2024-11-20 05:32:23.125772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:51.339 [2024-11-20 05:32:23.125783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.339 [2024-11-20 05:32:23.127939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.339 [2024-11-20 05:32:23.127978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:51.339 BaseBdev2 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.339 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.597 BaseBdev3_malloc 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.597 [2024-11-20 05:32:23.183491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:51.597 [2024-11-20 05:32:23.183552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.597 [2024-11-20 05:32:23.183574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:51.597 [2024-11-20 05:32:23.183586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.597 [2024-11-20 05:32:23.185713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.597 [2024-11-20 05:32:23.185750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:51.597 BaseBdev3 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.597 BaseBdev4_malloc 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.597 [2024-11-20 05:32:23.227440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:51.597 [2024-11-20 05:32:23.227490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.597 [2024-11-20 05:32:23.227507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:51.597 [2024-11-20 05:32:23.227518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.597 [2024-11-20 05:32:23.229611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.597 [2024-11-20 05:32:23.229646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:51.597 BaseBdev4 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.597 spare_malloc 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.597 spare_delay 00:21:51.597 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.598 [2024-11-20 05:32:23.275182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:51.598 [2024-11-20 05:32:23.275233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.598 [2024-11-20 05:32:23.275250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:51.598 [2024-11-20 05:32:23.275260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.598 [2024-11-20 05:32:23.277386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.598 [2024-11-20 05:32:23.277421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:51.598 spare 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.598 [2024-11-20 05:32:23.283229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:51.598 [2024-11-20 05:32:23.285045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:51.598 [2024-11-20 05:32:23.285115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:51.598 [2024-11-20 05:32:23.285166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:21:51.598 [2024-11-20 05:32:23.285250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:51.598 [2024-11-20 05:32:23.285270] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:51.598 [2024-11-20 05:32:23.285538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:51.598 [2024-11-20 05:32:23.285694] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:51.598 [2024-11-20 05:32:23.285709] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:51.598 [2024-11-20 05:32:23.285855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.598 "name": "raid_bdev1", 00:21:51.598 "uuid": "cc085ca5-0747-4ab0-8688-65132bf605ac", 00:21:51.598 "strip_size_kb": 0, 00:21:51.598 "state": "online", 00:21:51.598 "raid_level": "raid1", 00:21:51.598 "superblock": false, 00:21:51.598 "num_base_bdevs": 4, 00:21:51.598 "num_base_bdevs_discovered": 4, 00:21:51.598 "num_base_bdevs_operational": 4, 00:21:51.598 "base_bdevs_list": [ 00:21:51.598 { 00:21:51.598 "name": "BaseBdev1", 00:21:51.598 "uuid": "80adf0b6-3eb3-5704-b665-16475132471f", 00:21:51.598 "is_configured": true, 00:21:51.598 "data_offset": 0, 00:21:51.598 "data_size": 65536 00:21:51.598 }, 00:21:51.598 { 00:21:51.598 "name": "BaseBdev2", 00:21:51.598 "uuid": "0d57f3b6-d904-5a28-af4b-0b89c5e2134c", 00:21:51.598 "is_configured": true, 00:21:51.598 "data_offset": 0, 00:21:51.598 "data_size": 65536 00:21:51.598 }, 00:21:51.598 { 00:21:51.598 "name": "BaseBdev3", 00:21:51.598 "uuid": "598b9add-0f6b-50f2-93be-7d495b0d3737", 00:21:51.598 "is_configured": true, 00:21:51.598 "data_offset": 0, 00:21:51.598 "data_size": 65536 00:21:51.598 }, 00:21:51.598 { 00:21:51.598 "name": "BaseBdev4", 00:21:51.598 "uuid": "2c4f390e-56e0-52c8-905e-c6c35e92d556", 00:21:51.598 "is_configured": true, 00:21:51.598 "data_offset": 0, 00:21:51.598 "data_size": 65536 00:21:51.598 } 00:21:51.598 ] 00:21:51.598 }' 00:21:51.598 
05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.598 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.855 [2024-11-20 05:32:23.603649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:51.855 05:32:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.855 [2024-11-20 05:32:23.671279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.855 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.856 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.856 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.856 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.856 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.856 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.856 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.113 05:32:23 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.113 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.113 "name": "raid_bdev1", 00:21:52.113 "uuid": "cc085ca5-0747-4ab0-8688-65132bf605ac", 00:21:52.113 "strip_size_kb": 0, 00:21:52.113 "state": "online", 00:21:52.113 "raid_level": "raid1", 00:21:52.113 "superblock": false, 00:21:52.113 "num_base_bdevs": 4, 00:21:52.113 "num_base_bdevs_discovered": 3, 00:21:52.113 "num_base_bdevs_operational": 3, 00:21:52.113 "base_bdevs_list": [ 00:21:52.113 { 00:21:52.113 "name": null, 00:21:52.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.113 "is_configured": false, 00:21:52.113 "data_offset": 0, 00:21:52.113 "data_size": 65536 00:21:52.113 }, 00:21:52.113 { 00:21:52.113 "name": "BaseBdev2", 00:21:52.113 "uuid": "0d57f3b6-d904-5a28-af4b-0b89c5e2134c", 00:21:52.113 "is_configured": true, 00:21:52.113 "data_offset": 0, 00:21:52.113 "data_size": 65536 00:21:52.113 }, 00:21:52.113 { 00:21:52.113 "name": "BaseBdev3", 00:21:52.113 "uuid": "598b9add-0f6b-50f2-93be-7d495b0d3737", 00:21:52.113 "is_configured": true, 00:21:52.113 "data_offset": 0, 00:21:52.113 "data_size": 65536 00:21:52.113 }, 00:21:52.113 { 00:21:52.113 "name": "BaseBdev4", 00:21:52.113 "uuid": "2c4f390e-56e0-52c8-905e-c6c35e92d556", 00:21:52.113 "is_configured": true, 00:21:52.113 "data_offset": 0, 00:21:52.113 "data_size": 65536 00:21:52.113 } 00:21:52.113 ] 00:21:52.113 }' 00:21:52.113 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.113 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:52.113 [2024-11-20 05:32:23.760674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:52.113 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:52.113 Zero copy mechanism will not be used. 00:21:52.113 Running I/O for 60 seconds... 
00:21:52.371 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:52.371 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.371 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:52.371 [2024-11-20 05:32:23.954311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:52.371 05:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.371 05:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:52.371 [2024-11-20 05:32:23.992623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:21:52.371 [2024-11-20 05:32:23.994562] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:52.371 [2024-11-20 05:32:24.096519] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:52.371 [2024-11-20 05:32:24.096950] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:52.629 [2024-11-20 05:32:24.216091] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:52.629 [2024-11-20 05:32:24.216340] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:52.629 [2024-11-20 05:32:24.443689] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:52.629 [2024-11-20 05:32:24.444176] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:52.961 [2024-11-20 05:32:24.647912] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 
offset_begin: 6144 offset_end: 12288 00:21:53.221 134.00 IOPS, 402.00 MiB/s [2024-11-20T05:32:25.056Z] [2024-11-20 05:32:24.878147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:53.221 05:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.221 05:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.221 05:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:53.221 05:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:53.221 05:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.221 05:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.221 05:32:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.221 05:32:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:53.221 05:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.221 05:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.221 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.221 "name": "raid_bdev1", 00:21:53.221 "uuid": "cc085ca5-0747-4ab0-8688-65132bf605ac", 00:21:53.221 "strip_size_kb": 0, 00:21:53.221 "state": "online", 00:21:53.221 "raid_level": "raid1", 00:21:53.221 "superblock": false, 00:21:53.221 "num_base_bdevs": 4, 00:21:53.221 "num_base_bdevs_discovered": 4, 00:21:53.221 "num_base_bdevs_operational": 4, 00:21:53.221 "process": { 00:21:53.221 "type": "rebuild", 00:21:53.221 "target": "spare", 00:21:53.221 "progress": { 00:21:53.221 "blocks": 14336, 00:21:53.221 "percent": 21 
00:21:53.221 } 00:21:53.221 }, 00:21:53.221 "base_bdevs_list": [ 00:21:53.221 { 00:21:53.221 "name": "spare", 00:21:53.221 "uuid": "ecc663a1-ca20-5b09-aa06-bc4a32e211ab", 00:21:53.221 "is_configured": true, 00:21:53.221 "data_offset": 0, 00:21:53.221 "data_size": 65536 00:21:53.221 }, 00:21:53.221 { 00:21:53.221 "name": "BaseBdev2", 00:21:53.221 "uuid": "0d57f3b6-d904-5a28-af4b-0b89c5e2134c", 00:21:53.221 "is_configured": true, 00:21:53.221 "data_offset": 0, 00:21:53.221 "data_size": 65536 00:21:53.221 }, 00:21:53.221 { 00:21:53.221 "name": "BaseBdev3", 00:21:53.221 "uuid": "598b9add-0f6b-50f2-93be-7d495b0d3737", 00:21:53.221 "is_configured": true, 00:21:53.221 "data_offset": 0, 00:21:53.221 "data_size": 65536 00:21:53.221 }, 00:21:53.221 { 00:21:53.221 "name": "BaseBdev4", 00:21:53.221 "uuid": "2c4f390e-56e0-52c8-905e-c6c35e92d556", 00:21:53.221 "is_configured": true, 00:21:53.221 "data_offset": 0, 00:21:53.221 "data_size": 65536 00:21:53.221 } 00:21:53.221 ] 00:21:53.221 }' 00:21:53.221 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:53.481 [2024-11-20 05:32:25.102737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:53.481 [2024-11-20 05:32:25.103489] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
16384 offset_begin: 12288 offset_end: 18432 00:21:53.481 [2024-11-20 05:32:25.104028] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:53.481 [2024-11-20 05:32:25.205921] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:53.481 [2024-11-20 05:32:25.208881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:53.481 [2024-11-20 05:32:25.208918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:53.481 [2024-11-20 05:32:25.208928] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:53.481 [2024-11-20 05:32:25.227629] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:53.481 
05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.481 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:53.481 "name": "raid_bdev1", 00:21:53.481 "uuid": "cc085ca5-0747-4ab0-8688-65132bf605ac", 00:21:53.481 "strip_size_kb": 0, 00:21:53.481 "state": "online", 00:21:53.481 "raid_level": "raid1", 00:21:53.481 "superblock": false, 00:21:53.481 "num_base_bdevs": 4, 00:21:53.481 "num_base_bdevs_discovered": 3, 00:21:53.481 "num_base_bdevs_operational": 3, 00:21:53.481 "base_bdevs_list": [ 00:21:53.481 { 00:21:53.481 "name": null, 00:21:53.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.482 "is_configured": false, 00:21:53.482 "data_offset": 0, 00:21:53.482 "data_size": 65536 00:21:53.482 }, 00:21:53.482 { 00:21:53.482 "name": "BaseBdev2", 00:21:53.482 "uuid": "0d57f3b6-d904-5a28-af4b-0b89c5e2134c", 00:21:53.482 "is_configured": true, 00:21:53.482 "data_offset": 0, 00:21:53.482 "data_size": 65536 00:21:53.482 }, 00:21:53.482 { 00:21:53.482 "name": "BaseBdev3", 00:21:53.482 "uuid": "598b9add-0f6b-50f2-93be-7d495b0d3737", 00:21:53.482 "is_configured": true, 00:21:53.482 "data_offset": 0, 00:21:53.482 "data_size": 65536 00:21:53.482 }, 00:21:53.482 { 00:21:53.482 "name": "BaseBdev4", 00:21:53.482 "uuid": "2c4f390e-56e0-52c8-905e-c6c35e92d556", 00:21:53.482 "is_configured": true, 00:21:53.482 "data_offset": 0, 00:21:53.482 "data_size": 65536 
00:21:53.482 } 00:21:53.482 ] 00:21:53.482 }' 00:21:53.482 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:53.482 05:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:53.741 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:53.741 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.741 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:53.741 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:53.741 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.741 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.741 05:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.741 05:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:53.741 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.000 05:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.000 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.000 "name": "raid_bdev1", 00:21:54.000 "uuid": "cc085ca5-0747-4ab0-8688-65132bf605ac", 00:21:54.000 "strip_size_kb": 0, 00:21:54.000 "state": "online", 00:21:54.000 "raid_level": "raid1", 00:21:54.000 "superblock": false, 00:21:54.000 "num_base_bdevs": 4, 00:21:54.000 "num_base_bdevs_discovered": 3, 00:21:54.000 "num_base_bdevs_operational": 3, 00:21:54.000 "base_bdevs_list": [ 00:21:54.000 { 00:21:54.000 "name": null, 00:21:54.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.000 "is_configured": false, 00:21:54.000 
"data_offset": 0, 00:21:54.000 "data_size": 65536 00:21:54.000 }, 00:21:54.000 { 00:21:54.000 "name": "BaseBdev2", 00:21:54.000 "uuid": "0d57f3b6-d904-5a28-af4b-0b89c5e2134c", 00:21:54.000 "is_configured": true, 00:21:54.000 "data_offset": 0, 00:21:54.000 "data_size": 65536 00:21:54.000 }, 00:21:54.000 { 00:21:54.000 "name": "BaseBdev3", 00:21:54.000 "uuid": "598b9add-0f6b-50f2-93be-7d495b0d3737", 00:21:54.000 "is_configured": true, 00:21:54.000 "data_offset": 0, 00:21:54.000 "data_size": 65536 00:21:54.000 }, 00:21:54.000 { 00:21:54.000 "name": "BaseBdev4", 00:21:54.000 "uuid": "2c4f390e-56e0-52c8-905e-c6c35e92d556", 00:21:54.000 "is_configured": true, 00:21:54.000 "data_offset": 0, 00:21:54.000 "data_size": 65536 00:21:54.000 } 00:21:54.000 ] 00:21:54.000 }' 00:21:54.000 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.000 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:54.000 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.000 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:54.000 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:54.000 05:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.000 05:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:54.000 [2024-11-20 05:32:25.643773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:54.000 05:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.000 05:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:54.000 [2024-11-20 05:32:25.703935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:21:54.000 [2024-11-20 05:32:25.705638] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:54.000 151.50 IOPS, 454.50 MiB/s [2024-11-20T05:32:25.835Z] [2024-11-20 05:32:25.827881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:54.258 [2024-11-20 05:32:25.937625] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:54.258 [2024-11-20 05:32:25.937865] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:54.517 [2024-11-20 05:32:26.209173] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:54.775 [2024-11-20 05:32:26.425890] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:55.033 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.033 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.033 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.033 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:55.033 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.033 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.033 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.033 05:32:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.033 05:32:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:21:55.033 05:32:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.033 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.033 "name": "raid_bdev1", 00:21:55.033 "uuid": "cc085ca5-0747-4ab0-8688-65132bf605ac", 00:21:55.033 "strip_size_kb": 0, 00:21:55.033 "state": "online", 00:21:55.034 "raid_level": "raid1", 00:21:55.034 "superblock": false, 00:21:55.034 "num_base_bdevs": 4, 00:21:55.034 "num_base_bdevs_discovered": 4, 00:21:55.034 "num_base_bdevs_operational": 4, 00:21:55.034 "process": { 00:21:55.034 "type": "rebuild", 00:21:55.034 "target": "spare", 00:21:55.034 "progress": { 00:21:55.034 "blocks": 12288, 00:21:55.034 "percent": 18 00:21:55.034 } 00:21:55.034 }, 00:21:55.034 "base_bdevs_list": [ 00:21:55.034 { 00:21:55.034 "name": "spare", 00:21:55.034 "uuid": "ecc663a1-ca20-5b09-aa06-bc4a32e211ab", 00:21:55.034 "is_configured": true, 00:21:55.034 "data_offset": 0, 00:21:55.034 "data_size": 65536 00:21:55.034 }, 00:21:55.034 { 00:21:55.034 "name": "BaseBdev2", 00:21:55.034 "uuid": "0d57f3b6-d904-5a28-af4b-0b89c5e2134c", 00:21:55.034 "is_configured": true, 00:21:55.034 "data_offset": 0, 00:21:55.034 "data_size": 65536 00:21:55.034 }, 00:21:55.034 { 00:21:55.034 "name": "BaseBdev3", 00:21:55.034 "uuid": "598b9add-0f6b-50f2-93be-7d495b0d3737", 00:21:55.034 "is_configured": true, 00:21:55.034 "data_offset": 0, 00:21:55.034 "data_size": 65536 00:21:55.034 }, 00:21:55.034 { 00:21:55.034 "name": "BaseBdev4", 00:21:55.034 "uuid": "2c4f390e-56e0-52c8-905e-c6c35e92d556", 00:21:55.034 "is_configured": true, 00:21:55.034 "data_offset": 0, 00:21:55.034 "data_size": 65536 00:21:55.034 } 00:21:55.034 ] 00:21:55.034 }' 00:21:55.034 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.034 [2024-11-20 05:32:26.748747] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 
offset_end: 18432 00:21:55.034 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.034 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.034 151.33 IOPS, 454.00 MiB/s [2024-11-20T05:32:26.869Z] 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:55.034 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:21:55.034 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:55.034 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:55.034 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:21:55.034 05:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:55.034 05:32:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.034 05:32:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:55.034 [2024-11-20 05:32:26.806457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:55.293 [2024-11-20 05:32:26.970443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:55.293 [2024-11-20 05:32:27.077485] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:21:55.293 [2024-11-20 05:32:27.077533] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:21:55.293 [2024-11-20 05:32:27.078139] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:55.293 [2024-11-20 05:32:27.085567] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
16384 offset_begin: 12288 offset_end: 18432 00:21:55.293 05:32:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.293 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:21:55.293 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:21:55.293 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.293 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.293 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.293 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:55.293 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.293 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.293 05:32:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.293 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.293 05:32:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:55.293 05:32:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.293 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.293 "name": "raid_bdev1", 00:21:55.293 "uuid": "cc085ca5-0747-4ab0-8688-65132bf605ac", 00:21:55.293 "strip_size_kb": 0, 00:21:55.293 "state": "online", 00:21:55.293 "raid_level": "raid1", 00:21:55.293 "superblock": false, 00:21:55.293 "num_base_bdevs": 4, 00:21:55.293 "num_base_bdevs_discovered": 3, 00:21:55.293 "num_base_bdevs_operational": 3, 00:21:55.293 "process": { 00:21:55.293 "type": "rebuild", 
00:21:55.293 "target": "spare", 00:21:55.293 "progress": { 00:21:55.293 "blocks": 16384, 00:21:55.293 "percent": 25 00:21:55.293 } 00:21:55.293 }, 00:21:55.293 "base_bdevs_list": [ 00:21:55.293 { 00:21:55.293 "name": "spare", 00:21:55.293 "uuid": "ecc663a1-ca20-5b09-aa06-bc4a32e211ab", 00:21:55.293 "is_configured": true, 00:21:55.293 "data_offset": 0, 00:21:55.293 "data_size": 65536 00:21:55.293 }, 00:21:55.293 { 00:21:55.293 "name": null, 00:21:55.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.293 "is_configured": false, 00:21:55.293 "data_offset": 0, 00:21:55.293 "data_size": 65536 00:21:55.293 }, 00:21:55.293 { 00:21:55.293 "name": "BaseBdev3", 00:21:55.293 "uuid": "598b9add-0f6b-50f2-93be-7d495b0d3737", 00:21:55.293 "is_configured": true, 00:21:55.293 "data_offset": 0, 00:21:55.293 "data_size": 65536 00:21:55.293 }, 00:21:55.293 { 00:21:55.293 "name": "BaseBdev4", 00:21:55.293 "uuid": "2c4f390e-56e0-52c8-905e-c6c35e92d556", 00:21:55.293 "is_configured": true, 00:21:55.293 "data_offset": 0, 00:21:55.293 "data_size": 65536 00:21:55.293 } 00:21:55.293 ] 00:21:55.293 }' 00:21:55.293 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.551 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=382 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.552 "name": "raid_bdev1", 00:21:55.552 "uuid": "cc085ca5-0747-4ab0-8688-65132bf605ac", 00:21:55.552 "strip_size_kb": 0, 00:21:55.552 "state": "online", 00:21:55.552 "raid_level": "raid1", 00:21:55.552 "superblock": false, 00:21:55.552 "num_base_bdevs": 4, 00:21:55.552 "num_base_bdevs_discovered": 3, 00:21:55.552 "num_base_bdevs_operational": 3, 00:21:55.552 "process": { 00:21:55.552 "type": "rebuild", 00:21:55.552 "target": "spare", 00:21:55.552 "progress": { 00:21:55.552 "blocks": 16384, 00:21:55.552 "percent": 25 00:21:55.552 } 00:21:55.552 }, 00:21:55.552 "base_bdevs_list": [ 00:21:55.552 { 00:21:55.552 "name": "spare", 00:21:55.552 "uuid": "ecc663a1-ca20-5b09-aa06-bc4a32e211ab", 00:21:55.552 "is_configured": true, 00:21:55.552 "data_offset": 0, 00:21:55.552 "data_size": 65536 00:21:55.552 }, 00:21:55.552 { 00:21:55.552 "name": null, 00:21:55.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.552 "is_configured": false, 00:21:55.552 "data_offset": 0, 00:21:55.552 
"data_size": 65536 00:21:55.552 }, 00:21:55.552 { 00:21:55.552 "name": "BaseBdev3", 00:21:55.552 "uuid": "598b9add-0f6b-50f2-93be-7d495b0d3737", 00:21:55.552 "is_configured": true, 00:21:55.552 "data_offset": 0, 00:21:55.552 "data_size": 65536 00:21:55.552 }, 00:21:55.552 { 00:21:55.552 "name": "BaseBdev4", 00:21:55.552 "uuid": "2c4f390e-56e0-52c8-905e-c6c35e92d556", 00:21:55.552 "is_configured": true, 00:21:55.552 "data_offset": 0, 00:21:55.552 "data_size": 65536 00:21:55.552 } 00:21:55.552 ] 00:21:55.552 }' 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:55.552 05:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:55.810 [2024-11-20 05:32:27.549922] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:56.068 [2024-11-20 05:32:27.768686] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:56.068 [2024-11-20 05:32:27.769107] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:56.068 130.50 IOPS, 391.50 MiB/s [2024-11-20T05:32:27.903Z] [2024-11-20 05:32:27.878265] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:56.633 [2024-11-20 05:32:28.193255] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:56.633 "name": "raid_bdev1", 00:21:56.633 "uuid": "cc085ca5-0747-4ab0-8688-65132bf605ac", 00:21:56.633 "strip_size_kb": 0, 00:21:56.633 "state": "online", 00:21:56.633 "raid_level": "raid1", 00:21:56.633 "superblock": false, 00:21:56.633 "num_base_bdevs": 4, 00:21:56.633 "num_base_bdevs_discovered": 3, 00:21:56.633 "num_base_bdevs_operational": 3, 00:21:56.633 "process": { 00:21:56.633 "type": "rebuild", 00:21:56.633 "target": "spare", 00:21:56.633 "progress": { 00:21:56.633 "blocks": 32768, 00:21:56.633 "percent": 50 00:21:56.633 } 00:21:56.633 }, 00:21:56.633 "base_bdevs_list": [ 00:21:56.633 { 00:21:56.633 "name": "spare", 00:21:56.633 "uuid": "ecc663a1-ca20-5b09-aa06-bc4a32e211ab", 00:21:56.633 "is_configured": 
true, 00:21:56.633 "data_offset": 0, 00:21:56.633 "data_size": 65536 00:21:56.633 }, 00:21:56.633 { 00:21:56.633 "name": null, 00:21:56.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.633 "is_configured": false, 00:21:56.633 "data_offset": 0, 00:21:56.633 "data_size": 65536 00:21:56.633 }, 00:21:56.633 { 00:21:56.633 "name": "BaseBdev3", 00:21:56.633 "uuid": "598b9add-0f6b-50f2-93be-7d495b0d3737", 00:21:56.633 "is_configured": true, 00:21:56.633 "data_offset": 0, 00:21:56.633 "data_size": 65536 00:21:56.633 }, 00:21:56.633 { 00:21:56.633 "name": "BaseBdev4", 00:21:56.633 "uuid": "2c4f390e-56e0-52c8-905e-c6c35e92d556", 00:21:56.633 "is_configured": true, 00:21:56.633 "data_offset": 0, 00:21:56.633 "data_size": 65536 00:21:56.633 } 00:21:56.633 ] 00:21:56.633 }' 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:56.633 05:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:56.633 [2024-11-20 05:32:28.413285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:56.890 [2024-11-20 05:32:28.635926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:57.712 116.00 IOPS, 348.00 MiB/s [2024-11-20T05:32:29.547Z] 05:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:57.712 05:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:57.712 05:32:29 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:57.712 05:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:57.713 05:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:57.713 05:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:57.713 05:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.713 05:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.713 05:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:57.713 05:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.713 05:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.713 05:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:57.713 "name": "raid_bdev1", 00:21:57.713 "uuid": "cc085ca5-0747-4ab0-8688-65132bf605ac", 00:21:57.713 "strip_size_kb": 0, 00:21:57.713 "state": "online", 00:21:57.713 "raid_level": "raid1", 00:21:57.713 "superblock": false, 00:21:57.713 "num_base_bdevs": 4, 00:21:57.713 "num_base_bdevs_discovered": 3, 00:21:57.713 "num_base_bdevs_operational": 3, 00:21:57.713 "process": { 00:21:57.713 "type": "rebuild", 00:21:57.713 "target": "spare", 00:21:57.713 "progress": { 00:21:57.713 "blocks": 53248, 00:21:57.713 "percent": 81 00:21:57.713 } 00:21:57.713 }, 00:21:57.713 "base_bdevs_list": [ 00:21:57.713 { 00:21:57.713 "name": "spare", 00:21:57.713 "uuid": "ecc663a1-ca20-5b09-aa06-bc4a32e211ab", 00:21:57.713 "is_configured": true, 00:21:57.713 "data_offset": 0, 00:21:57.713 "data_size": 65536 00:21:57.713 }, 00:21:57.713 { 00:21:57.713 "name": null, 00:21:57.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.713 
"is_configured": false, 00:21:57.713 "data_offset": 0, 00:21:57.713 "data_size": 65536 00:21:57.713 }, 00:21:57.713 { 00:21:57.713 "name": "BaseBdev3", 00:21:57.713 "uuid": "598b9add-0f6b-50f2-93be-7d495b0d3737", 00:21:57.713 "is_configured": true, 00:21:57.713 "data_offset": 0, 00:21:57.713 "data_size": 65536 00:21:57.713 }, 00:21:57.713 { 00:21:57.713 "name": "BaseBdev4", 00:21:57.713 "uuid": "2c4f390e-56e0-52c8-905e-c6c35e92d556", 00:21:57.713 "is_configured": true, 00:21:57.713 "data_offset": 0, 00:21:57.713 "data_size": 65536 00:21:57.713 } 00:21:57.713 ] 00:21:57.713 }' 00:21:57.713 05:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:57.713 05:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:57.713 05:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:57.713 05:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.713 05:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:57.970 [2024-11-20 05:32:29.613224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:58.227 102.83 IOPS, 308.50 MiB/s [2024-11-20T05:32:30.062Z] [2024-11-20 05:32:30.047536] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:58.485 [2024-11-20 05:32:30.147535] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:58.485 [2024-11-20 05:32:30.149257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:58.743 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:58.743 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:58.743 05:32:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:58.743 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:58.743 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:58.743 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:58.743 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.744 05:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.744 05:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:58.744 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.744 05:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.744 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:58.744 "name": "raid_bdev1", 00:21:58.744 "uuid": "cc085ca5-0747-4ab0-8688-65132bf605ac", 00:21:58.744 "strip_size_kb": 0, 00:21:58.744 "state": "online", 00:21:58.744 "raid_level": "raid1", 00:21:58.744 "superblock": false, 00:21:58.744 "num_base_bdevs": 4, 00:21:58.744 "num_base_bdevs_discovered": 3, 00:21:58.744 "num_base_bdevs_operational": 3, 00:21:58.744 "base_bdevs_list": [ 00:21:58.744 { 00:21:58.744 "name": "spare", 00:21:58.744 "uuid": "ecc663a1-ca20-5b09-aa06-bc4a32e211ab", 00:21:58.744 "is_configured": true, 00:21:58.744 "data_offset": 0, 00:21:58.744 "data_size": 65536 00:21:58.744 }, 00:21:58.744 { 00:21:58.744 "name": null, 00:21:58.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.744 "is_configured": false, 00:21:58.744 "data_offset": 0, 00:21:58.744 "data_size": 65536 00:21:58.744 }, 00:21:58.744 { 00:21:58.744 "name": "BaseBdev3", 00:21:58.744 "uuid": "598b9add-0f6b-50f2-93be-7d495b0d3737", 
00:21:58.744 "is_configured": true, 00:21:58.744 "data_offset": 0, 00:21:58.744 "data_size": 65536 00:21:58.744 }, 00:21:58.744 { 00:21:58.744 "name": "BaseBdev4", 00:21:58.744 "uuid": "2c4f390e-56e0-52c8-905e-c6c35e92d556", 00:21:58.744 "is_configured": true, 00:21:58.744 "data_offset": 0, 00:21:58.744 "data_size": 65536 00:21:58.744 } 00:21:58.744 ] 00:21:58.744 }' 00:21:58.744 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:58.744 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:58.744 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:59.002 "name": "raid_bdev1", 00:21:59.002 "uuid": "cc085ca5-0747-4ab0-8688-65132bf605ac", 00:21:59.002 "strip_size_kb": 0, 00:21:59.002 "state": "online", 00:21:59.002 "raid_level": "raid1", 00:21:59.002 "superblock": false, 00:21:59.002 "num_base_bdevs": 4, 00:21:59.002 "num_base_bdevs_discovered": 3, 00:21:59.002 "num_base_bdevs_operational": 3, 00:21:59.002 "base_bdevs_list": [ 00:21:59.002 { 00:21:59.002 "name": "spare", 00:21:59.002 "uuid": "ecc663a1-ca20-5b09-aa06-bc4a32e211ab", 00:21:59.002 "is_configured": true, 00:21:59.002 "data_offset": 0, 00:21:59.002 "data_size": 65536 00:21:59.002 }, 00:21:59.002 { 00:21:59.002 "name": null, 00:21:59.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.002 "is_configured": false, 00:21:59.002 "data_offset": 0, 00:21:59.002 "data_size": 65536 00:21:59.002 }, 00:21:59.002 { 00:21:59.002 "name": "BaseBdev3", 00:21:59.002 "uuid": "598b9add-0f6b-50f2-93be-7d495b0d3737", 00:21:59.002 "is_configured": true, 00:21:59.002 "data_offset": 0, 00:21:59.002 "data_size": 65536 00:21:59.002 }, 00:21:59.002 { 00:21:59.002 "name": "BaseBdev4", 00:21:59.002 "uuid": "2c4f390e-56e0-52c8-905e-c6c35e92d556", 00:21:59.002 "is_configured": true, 00:21:59.002 "data_offset": 0, 00:21:59.002 "data_size": 65536 00:21:59.002 } 00:21:59.002 ] 00:21:59.002 }' 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 3 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.002 "name": "raid_bdev1", 00:21:59.002 "uuid": "cc085ca5-0747-4ab0-8688-65132bf605ac", 00:21:59.002 "strip_size_kb": 0, 00:21:59.002 "state": "online", 00:21:59.002 "raid_level": "raid1", 00:21:59.002 "superblock": false, 00:21:59.002 "num_base_bdevs": 4, 00:21:59.002 "num_base_bdevs_discovered": 3, 00:21:59.002 "num_base_bdevs_operational": 3, 00:21:59.002 
"base_bdevs_list": [ 00:21:59.002 { 00:21:59.002 "name": "spare", 00:21:59.002 "uuid": "ecc663a1-ca20-5b09-aa06-bc4a32e211ab", 00:21:59.002 "is_configured": true, 00:21:59.002 "data_offset": 0, 00:21:59.002 "data_size": 65536 00:21:59.002 }, 00:21:59.002 { 00:21:59.002 "name": null, 00:21:59.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.002 "is_configured": false, 00:21:59.002 "data_offset": 0, 00:21:59.002 "data_size": 65536 00:21:59.002 }, 00:21:59.002 { 00:21:59.002 "name": "BaseBdev3", 00:21:59.002 "uuid": "598b9add-0f6b-50f2-93be-7d495b0d3737", 00:21:59.002 "is_configured": true, 00:21:59.002 "data_offset": 0, 00:21:59.002 "data_size": 65536 00:21:59.002 }, 00:21:59.002 { 00:21:59.002 "name": "BaseBdev4", 00:21:59.002 "uuid": "2c4f390e-56e0-52c8-905e-c6c35e92d556", 00:21:59.002 "is_configured": true, 00:21:59.002 "data_offset": 0, 00:21:59.002 "data_size": 65536 00:21:59.002 } 00:21:59.002 ] 00:21:59.002 }' 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.002 05:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:59.260 92.86 IOPS, 278.57 MiB/s [2024-11-20T05:32:31.095Z] 05:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:59.260 05:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.260 05:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:59.260 [2024-11-20 05:32:31.000349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:59.260 [2024-11-20 05:32:31.000393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:59.260 00:21:59.260 Latency(us) 00:21:59.260 [2024-11-20T05:32:31.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.260 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 
2, IO size: 3145728) 00:21:59.260 raid_bdev1 : 7.26 90.94 272.82 0.00 0.00 14509.89 281.99 112923.57 00:21:59.260 [2024-11-20T05:32:31.095Z] =================================================================================================================== 00:21:59.260 [2024-11-20T05:32:31.095Z] Total : 90.94 272.82 0.00 0.00 14509.89 281.99 112923.57 00:21:59.260 [2024-11-20 05:32:31.033007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.260 [2024-11-20 05:32:31.033053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.260 [2024-11-20 05:32:31.033147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:59.260 [2024-11-20 05:32:31.033156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:59.260 { 00:21:59.260 "results": [ 00:21:59.260 { 00:21:59.260 "job": "raid_bdev1", 00:21:59.260 "core_mask": "0x1", 00:21:59.260 "workload": "randrw", 00:21:59.260 "percentage": 50, 00:21:59.260 "status": "finished", 00:21:59.260 "queue_depth": 2, 00:21:59.260 "io_size": 3145728, 00:21:59.260 "runtime": 7.257568, 00:21:59.260 "iops": 90.93955440720639, 00:21:59.260 "mibps": 272.81866322161915, 00:21:59.260 "io_failed": 0, 00:21:59.260 "io_timeout": 0, 00:21:59.260 "avg_latency_us": 14509.890237762238, 00:21:59.260 "min_latency_us": 281.99384615384616, 00:21:59.260 "max_latency_us": 112923.56923076924 00:21:59.260 } 00:21:59.260 ], 00:21:59.260 "core_count": 1 00:21:59.260 } 00:21:59.260 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.260 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.260 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.260 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:21:59.260 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:21:59.261 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.261 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:59.261 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:59.261 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:21:59.261 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:21:59.261 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:59.261 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:59.261 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:59.261 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:59.261 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:59.261 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:59.261 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:59.261 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:59.261 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:21:59.521 /dev/nbd0 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:59.521 05:32:31 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:59.521 1+0 records in 00:21:59.521 1+0 records out 00:21:59.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255981 s, 16.0 MB/s 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:59.521 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:59.779 /dev/nbd1 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:59.779 1+0 records in 00:21:59.779 1+0 records out 00:21:59.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278988 s, 14.7 MB/s 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:59.779 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:00.036 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:22:00.036 05:32:31 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:00.036 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:00.036 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:00.036 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:22:00.036 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:00.036 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev4') 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:00.293 05:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:00.293 /dev/nbd1 00:22:00.551 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:00.552 1+0 records in 00:22:00.552 1+0 records out 00:22:00.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316853 s, 12.9 MB/s 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:00.552 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:00.809 05:32:32 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:00.809 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:00.809 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:00.809 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:00.809 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:00.809 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:00.809 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:22:00.809 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:00.809 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:00.809 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:00.809 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:00.809 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:00.809 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:22:00.809 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:00.809 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:01.067 05:32:32 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76654 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 76654 ']' 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 76654 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76654 00:22:01.067 killing process with pid 76654 00:22:01.067 Received shutdown signal, test time was about 8.924985 seconds 00:22:01.067 00:22:01.067 Latency(us) 00:22:01.067 [2024-11-20T05:32:32.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.067 [2024-11-20T05:32:32.902Z] =================================================================================================================== 00:22:01.067 [2024-11-20T05:32:32.902Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76654' 
00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76654 00:22:01.067 [2024-11-20 05:32:32.687717] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:01.067 05:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76654 00:22:01.067 [2024-11-20 05:32:32.897645] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:01.999 ************************************ 00:22:02.000 END TEST raid_rebuild_test_io 00:22:02.000 ************************************ 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:22:02.000 00:22:02.000 real 0m11.373s 00:22:02.000 user 0m14.138s 00:22:02.000 sys 0m1.300s 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.000 05:32:33 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:22:02.000 05:32:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:22:02.000 05:32:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:02.000 05:32:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:02.000 ************************************ 00:22:02.000 START TEST raid_rebuild_test_sb_io 00:22:02.000 ************************************ 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@572 -- # local background_io=true 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:02.000 05:32:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77052 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77052 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 77052 ']' 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.000 05:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:02.000 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:02.000 Zero copy mechanism will not be used. 00:22:02.000 [2024-11-20 05:32:33.605975] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:22:02.000 [2024-11-20 05:32:33.606100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77052 ] 00:22:02.000 [2024-11-20 05:32:33.753162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.258 [2024-11-20 05:32:33.839471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.258 [2024-11-20 05:32:33.951721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:02.258 [2024-11-20 05:32:33.951773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.828 BaseBdev1_malloc 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.828 [2024-11-20 05:32:34.446425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:02.828 [2024-11-20 05:32:34.446485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.828 [2024-11-20 05:32:34.446504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:02.828 [2024-11-20 05:32:34.446513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.828 [2024-11-20 05:32:34.448272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.828 [2024-11-20 05:32:34.448307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:02.828 BaseBdev1 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.828 BaseBdev2_malloc 00:22:02.828 05:32:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.828 [2024-11-20 05:32:34.478039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:02.828 [2024-11-20 05:32:34.478090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.828 [2024-11-20 05:32:34.478106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:02.828 [2024-11-20 05:32:34.478114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.828 [2024-11-20 05:32:34.479856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.828 [2024-11-20 05:32:34.479985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:02.828 BaseBdev2 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.828 BaseBdev3_malloc 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # 
rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.828 [2024-11-20 05:32:34.522616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:02.828 [2024-11-20 05:32:34.522667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.828 [2024-11-20 05:32:34.522687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:02.828 [2024-11-20 05:32:34.522696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.828 [2024-11-20 05:32:34.524511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.828 [2024-11-20 05:32:34.524542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:02.828 BaseBdev3 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.828 BaseBdev4_malloc 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.828 [2024-11-20 05:32:34.554574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:02.828 [2024-11-20 05:32:34.554615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.828 [2024-11-20 05:32:34.554628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:02.828 [2024-11-20 05:32:34.554637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.828 [2024-11-20 05:32:34.556334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.828 [2024-11-20 05:32:34.556372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:02.828 BaseBdev4 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.828 spare_malloc 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.828 spare_delay 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.828 05:32:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.828 [2024-11-20 05:32:34.594153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:02.828 [2024-11-20 05:32:34.594200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.828 [2024-11-20 05:32:34.594215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:02.828 [2024-11-20 05:32:34.594224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.828 [2024-11-20 05:32:34.595972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.828 [2024-11-20 05:32:34.596004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:02.828 spare 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.828 [2024-11-20 05:32:34.602194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:02.828 [2024-11-20 05:32:34.603687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:02.828 [2024-11-20 05:32:34.603854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:02.828 [2024-11-20 05:32:34.603902] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:02.828 [2024-11-20 05:32:34.604052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:02.828 [2024-11-20 05:32:34.604062] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:02.828 [2024-11-20 05:32:34.604273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:02.828 [2024-11-20 05:32:34.604416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:02.828 [2024-11-20 05:32:34.604424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:02.828 [2024-11-20 05:32:34.604539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:02.828 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:02.829 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.829 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.829 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:02.829 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.829 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.829 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.829 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.829 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.829 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.829 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.829 "name": "raid_bdev1", 00:22:02.829 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:02.829 "strip_size_kb": 0, 00:22:02.829 "state": "online", 00:22:02.829 "raid_level": "raid1", 00:22:02.829 "superblock": true, 00:22:02.829 "num_base_bdevs": 4, 00:22:02.829 "num_base_bdevs_discovered": 4, 00:22:02.829 "num_base_bdevs_operational": 4, 00:22:02.829 "base_bdevs_list": [ 00:22:02.829 { 00:22:02.829 "name": "BaseBdev1", 00:22:02.829 "uuid": "0065f615-1fba-5b5e-9822-a25c13e646a6", 00:22:02.829 "is_configured": true, 00:22:02.829 "data_offset": 2048, 00:22:02.829 "data_size": 63488 00:22:02.829 }, 00:22:02.829 { 00:22:02.829 "name": "BaseBdev2", 00:22:02.829 "uuid": "0b23d294-1bb1-5101-9f80-f5c693c5f912", 00:22:02.829 "is_configured": true, 00:22:02.829 "data_offset": 2048, 00:22:02.829 "data_size": 63488 00:22:02.829 }, 00:22:02.829 { 00:22:02.829 "name": "BaseBdev3", 00:22:02.829 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:02.829 "is_configured": true, 00:22:02.829 "data_offset": 2048, 00:22:02.829 "data_size": 63488 00:22:02.829 }, 00:22:02.829 { 00:22:02.829 "name": "BaseBdev4", 00:22:02.829 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:02.829 
"is_configured": true, 00:22:02.829 "data_offset": 2048, 00:22:02.829 "data_size": 63488 00:22:02.829 } 00:22:02.829 ] 00:22:02.829 }' 00:22:02.829 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.829 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:03.392 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:03.393 [2024-11-20 05:32:34.942557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev1 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:03.393 [2024-11-20 05:32:34.994251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.393 05:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.393 05:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.393 
05:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:03.393 05:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.393 05:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.393 05:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.393 "name": "raid_bdev1", 00:22:03.393 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:03.393 "strip_size_kb": 0, 00:22:03.393 "state": "online", 00:22:03.393 "raid_level": "raid1", 00:22:03.393 "superblock": true, 00:22:03.393 "num_base_bdevs": 4, 00:22:03.393 "num_base_bdevs_discovered": 3, 00:22:03.393 "num_base_bdevs_operational": 3, 00:22:03.393 "base_bdevs_list": [ 00:22:03.393 { 00:22:03.393 "name": null, 00:22:03.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.393 "is_configured": false, 00:22:03.393 "data_offset": 0, 00:22:03.393 "data_size": 63488 00:22:03.393 }, 00:22:03.393 { 00:22:03.393 "name": "BaseBdev2", 00:22:03.393 "uuid": "0b23d294-1bb1-5101-9f80-f5c693c5f912", 00:22:03.393 "is_configured": true, 00:22:03.393 "data_offset": 2048, 00:22:03.393 "data_size": 63488 00:22:03.393 }, 00:22:03.393 { 00:22:03.393 "name": "BaseBdev3", 00:22:03.393 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:03.393 "is_configured": true, 00:22:03.393 "data_offset": 2048, 00:22:03.393 "data_size": 63488 00:22:03.393 }, 00:22:03.393 { 00:22:03.393 "name": "BaseBdev4", 00:22:03.393 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:03.393 "is_configured": true, 00:22:03.393 "data_offset": 2048, 00:22:03.393 "data_size": 63488 00:22:03.393 } 00:22:03.393 ] 00:22:03.393 }' 00:22:03.393 05:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.393 05:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:03.393 [2024-11-20 05:32:35.086800] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:03.393 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:03.393 Zero copy mechanism will not be used. 00:22:03.393 Running I/O for 60 seconds... 00:22:03.650 05:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:03.650 05:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.650 05:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:03.650 [2024-11-20 05:32:35.325877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:03.650 05:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.650 05:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:03.650 [2024-11-20 05:32:35.370237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:22:03.650 [2024-11-20 05:32:35.371949] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:03.908 [2024-11-20 05:32:35.631785] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:03.908 [2024-11-20 05:32:35.632468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:04.165 [2024-11-20 05:32:35.992638] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:04.423 183.00 IOPS, 549.00 MiB/s [2024-11-20T05:32:36.258Z] [2024-11-20 05:32:36.138912] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:04.681 "name": "raid_bdev1", 00:22:04.681 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:04.681 "strip_size_kb": 0, 00:22:04.681 "state": "online", 00:22:04.681 "raid_level": "raid1", 00:22:04.681 "superblock": true, 00:22:04.681 "num_base_bdevs": 4, 00:22:04.681 "num_base_bdevs_discovered": 4, 00:22:04.681 "num_base_bdevs_operational": 4, 00:22:04.681 "process": { 00:22:04.681 "type": "rebuild", 00:22:04.681 "target": "spare", 00:22:04.681 "progress": { 00:22:04.681 "blocks": 12288, 00:22:04.681 "percent": 19 00:22:04.681 } 00:22:04.681 }, 00:22:04.681 "base_bdevs_list": [ 00:22:04.681 { 00:22:04.681 "name": "spare", 00:22:04.681 "uuid": "14b8dd6f-fdcf-57a2-834f-542a589629cd", 00:22:04.681 "is_configured": true, 00:22:04.681 "data_offset": 2048, 00:22:04.681 "data_size": 63488 00:22:04.681 }, 00:22:04.681 { 00:22:04.681 "name": 
"BaseBdev2", 00:22:04.681 "uuid": "0b23d294-1bb1-5101-9f80-f5c693c5f912", 00:22:04.681 "is_configured": true, 00:22:04.681 "data_offset": 2048, 00:22:04.681 "data_size": 63488 00:22:04.681 }, 00:22:04.681 { 00:22:04.681 "name": "BaseBdev3", 00:22:04.681 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:04.681 "is_configured": true, 00:22:04.681 "data_offset": 2048, 00:22:04.681 "data_size": 63488 00:22:04.681 }, 00:22:04.681 { 00:22:04.681 "name": "BaseBdev4", 00:22:04.681 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:04.681 "is_configured": true, 00:22:04.681 "data_offset": 2048, 00:22:04.681 "data_size": 63488 00:22:04.681 } 00:22:04.681 ] 00:22:04.681 }' 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:04.681 [2024-11-20 05:32:36.457128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:04.681 [2024-11-20 05:32:36.476036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:04.681 [2024-11-20 05:32:36.476480] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:04.681 [2024-11-20 05:32:36.477332] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:04.681 [2024-11-20 05:32:36.485198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:04.681 [2024-11-20 05:32:36.485237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:04.681 [2024-11-20 05:32:36.485249] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:04.681 [2024-11-20 05:32:36.499558] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:04.681 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:04.939 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.939 05:32:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.939 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.939 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:04.939 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.939 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:04.939 "name": "raid_bdev1", 00:22:04.939 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:04.939 "strip_size_kb": 0, 00:22:04.939 "state": "online", 00:22:04.939 "raid_level": "raid1", 00:22:04.939 "superblock": true, 00:22:04.939 "num_base_bdevs": 4, 00:22:04.939 "num_base_bdevs_discovered": 3, 00:22:04.939 "num_base_bdevs_operational": 3, 00:22:04.939 "base_bdevs_list": [ 00:22:04.939 { 00:22:04.939 "name": null, 00:22:04.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.939 "is_configured": false, 00:22:04.939 "data_offset": 0, 00:22:04.939 "data_size": 63488 00:22:04.939 }, 00:22:04.939 { 00:22:04.939 "name": "BaseBdev2", 00:22:04.939 "uuid": "0b23d294-1bb1-5101-9f80-f5c693c5f912", 00:22:04.939 "is_configured": true, 00:22:04.939 "data_offset": 2048, 00:22:04.939 "data_size": 63488 00:22:04.939 }, 00:22:04.939 { 00:22:04.939 "name": "BaseBdev3", 00:22:04.939 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:04.939 "is_configured": true, 00:22:04.939 "data_offset": 2048, 00:22:04.939 "data_size": 63488 00:22:04.939 }, 00:22:04.939 { 00:22:04.939 "name": "BaseBdev4", 00:22:04.939 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:04.939 "is_configured": true, 00:22:04.939 "data_offset": 2048, 00:22:04.939 "data_size": 63488 00:22:04.939 } 00:22:04.939 ] 00:22:04.939 }' 00:22:04.939 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:04.939 05:32:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:05.196 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:05.196 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:05.196 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:05.196 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:05.196 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:05.196 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.196 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.196 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.196 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:05.196 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.196 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:05.196 "name": "raid_bdev1", 00:22:05.196 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:05.196 "strip_size_kb": 0, 00:22:05.196 "state": "online", 00:22:05.196 "raid_level": "raid1", 00:22:05.196 "superblock": true, 00:22:05.196 "num_base_bdevs": 4, 00:22:05.196 "num_base_bdevs_discovered": 3, 00:22:05.196 "num_base_bdevs_operational": 3, 00:22:05.196 "base_bdevs_list": [ 00:22:05.196 { 00:22:05.196 "name": null, 00:22:05.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.196 "is_configured": false, 00:22:05.196 "data_offset": 0, 00:22:05.196 "data_size": 63488 00:22:05.196 }, 00:22:05.196 { 00:22:05.196 "name": "BaseBdev2", 00:22:05.196 "uuid": 
"0b23d294-1bb1-5101-9f80-f5c693c5f912", 00:22:05.196 "is_configured": true, 00:22:05.196 "data_offset": 2048, 00:22:05.196 "data_size": 63488 00:22:05.196 }, 00:22:05.196 { 00:22:05.196 "name": "BaseBdev3", 00:22:05.196 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:05.196 "is_configured": true, 00:22:05.196 "data_offset": 2048, 00:22:05.196 "data_size": 63488 00:22:05.196 }, 00:22:05.196 { 00:22:05.197 "name": "BaseBdev4", 00:22:05.197 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:05.197 "is_configured": true, 00:22:05.197 "data_offset": 2048, 00:22:05.197 "data_size": 63488 00:22:05.197 } 00:22:05.197 ] 00:22:05.197 }' 00:22:05.197 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:05.197 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:05.197 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:05.197 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:05.197 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:05.197 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.197 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:05.197 [2024-11-20 05:32:36.947281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:05.197 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.197 05:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:05.197 [2024-11-20 05:32:37.020691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:05.197 [2024-11-20 05:32:37.022406] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:22:05.455 186.50 IOPS, 559.50 MiB/s [2024-11-20T05:32:37.290Z] [2024-11-20 05:32:37.129899] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:05.455 [2024-11-20 05:32:37.130474] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:05.455 [2024-11-20 05:32:37.262977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:05.455 [2024-11-20 05:32:37.268253] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:06.020 [2024-11-20 05:32:37.607502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:06.020 [2024-11-20 05:32:37.817996] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:06.020 [2024-11-20 05:32:37.818223] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:06.278 05:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:06.278 05:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:06.278 05:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:06.278 05:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:06.278 05:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:06.278 05:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.278 05:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:22:06.278 05:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.278 05:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:06.278 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.278 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:06.278 "name": "raid_bdev1", 00:22:06.278 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:06.278 "strip_size_kb": 0, 00:22:06.278 "state": "online", 00:22:06.278 "raid_level": "raid1", 00:22:06.278 "superblock": true, 00:22:06.278 "num_base_bdevs": 4, 00:22:06.278 "num_base_bdevs_discovered": 4, 00:22:06.278 "num_base_bdevs_operational": 4, 00:22:06.278 "process": { 00:22:06.278 "type": "rebuild", 00:22:06.278 "target": "spare", 00:22:06.278 "progress": { 00:22:06.278 "blocks": 12288, 00:22:06.278 "percent": 19 00:22:06.278 } 00:22:06.278 }, 00:22:06.278 "base_bdevs_list": [ 00:22:06.278 { 00:22:06.278 "name": "spare", 00:22:06.278 "uuid": "14b8dd6f-fdcf-57a2-834f-542a589629cd", 00:22:06.278 "is_configured": true, 00:22:06.278 "data_offset": 2048, 00:22:06.278 "data_size": 63488 00:22:06.278 }, 00:22:06.278 { 00:22:06.278 "name": "BaseBdev2", 00:22:06.278 "uuid": "0b23d294-1bb1-5101-9f80-f5c693c5f912", 00:22:06.278 "is_configured": true, 00:22:06.278 "data_offset": 2048, 00:22:06.278 "data_size": 63488 00:22:06.278 }, 00:22:06.278 { 00:22:06.278 "name": "BaseBdev3", 00:22:06.278 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:06.278 "is_configured": true, 00:22:06.278 "data_offset": 2048, 00:22:06.278 "data_size": 63488 00:22:06.278 }, 00:22:06.278 { 00:22:06.278 "name": "BaseBdev4", 00:22:06.278 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:06.278 "is_configured": true, 00:22:06.278 "data_offset": 2048, 00:22:06.278 "data_size": 63488 00:22:06.278 } 00:22:06.278 ] 00:22:06.278 }' 00:22:06.278 
05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:06.278 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:06.278 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:06.278 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:06.278 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:06.278 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:06.278 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:06.278 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:22:06.278 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:06.278 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:22:06.278 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:06.278 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.278 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:06.278 [2024-11-20 05:32:38.092679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:06.536 172.00 IOPS, 516.00 MiB/s [2024-11-20T05:32:38.371Z] [2024-11-20 05:32:38.171772] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:06.536 [2024-11-20 05:32:38.172502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:06.794 [2024-11-20 05:32:38.385858] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:22:06.794 [2024-11-20 05:32:38.386023] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:06.794 "name": "raid_bdev1", 00:22:06.794 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:06.794 "strip_size_kb": 0, 00:22:06.794 "state": "online", 00:22:06.794 "raid_level": 
"raid1", 00:22:06.794 "superblock": true, 00:22:06.794 "num_base_bdevs": 4, 00:22:06.794 "num_base_bdevs_discovered": 3, 00:22:06.794 "num_base_bdevs_operational": 3, 00:22:06.794 "process": { 00:22:06.794 "type": "rebuild", 00:22:06.794 "target": "spare", 00:22:06.794 "progress": { 00:22:06.794 "blocks": 16384, 00:22:06.794 "percent": 25 00:22:06.794 } 00:22:06.794 }, 00:22:06.794 "base_bdevs_list": [ 00:22:06.794 { 00:22:06.794 "name": "spare", 00:22:06.794 "uuid": "14b8dd6f-fdcf-57a2-834f-542a589629cd", 00:22:06.794 "is_configured": true, 00:22:06.794 "data_offset": 2048, 00:22:06.794 "data_size": 63488 00:22:06.794 }, 00:22:06.794 { 00:22:06.794 "name": null, 00:22:06.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.794 "is_configured": false, 00:22:06.794 "data_offset": 0, 00:22:06.794 "data_size": 63488 00:22:06.794 }, 00:22:06.794 { 00:22:06.794 "name": "BaseBdev3", 00:22:06.794 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:06.794 "is_configured": true, 00:22:06.794 "data_offset": 2048, 00:22:06.794 "data_size": 63488 00:22:06.794 }, 00:22:06.794 { 00:22:06.794 "name": "BaseBdev4", 00:22:06.794 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:06.794 "is_configured": true, 00:22:06.794 "data_offset": 2048, 00:22:06.794 "data_size": 63488 00:22:06.794 } 00:22:06.794 ] 00:22:06.794 }' 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=393 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.794 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:06.794 "name": "raid_bdev1", 00:22:06.794 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:06.795 "strip_size_kb": 0, 00:22:06.795 "state": "online", 00:22:06.795 "raid_level": "raid1", 00:22:06.795 "superblock": true, 00:22:06.795 "num_base_bdevs": 4, 00:22:06.795 "num_base_bdevs_discovered": 3, 00:22:06.795 "num_base_bdevs_operational": 3, 00:22:06.795 "process": { 00:22:06.795 "type": "rebuild", 00:22:06.795 "target": "spare", 00:22:06.795 "progress": { 00:22:06.795 "blocks": 16384, 00:22:06.795 "percent": 25 00:22:06.795 } 00:22:06.795 }, 00:22:06.795 "base_bdevs_list": [ 00:22:06.795 { 00:22:06.795 "name": "spare", 00:22:06.795 "uuid": "14b8dd6f-fdcf-57a2-834f-542a589629cd", 00:22:06.795 "is_configured": 
true, 00:22:06.795 "data_offset": 2048, 00:22:06.795 "data_size": 63488 00:22:06.795 }, 00:22:06.795 { 00:22:06.795 "name": null, 00:22:06.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.795 "is_configured": false, 00:22:06.795 "data_offset": 0, 00:22:06.795 "data_size": 63488 00:22:06.795 }, 00:22:06.795 { 00:22:06.795 "name": "BaseBdev3", 00:22:06.795 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:06.795 "is_configured": true, 00:22:06.795 "data_offset": 2048, 00:22:06.795 "data_size": 63488 00:22:06.795 }, 00:22:06.795 { 00:22:06.795 "name": "BaseBdev4", 00:22:06.795 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:06.795 "is_configured": true, 00:22:06.795 "data_offset": 2048, 00:22:06.795 "data_size": 63488 00:22:06.795 } 00:22:06.795 ] 00:22:06.795 }' 00:22:06.795 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:06.795 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:06.795 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:06.795 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:06.795 05:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:07.052 [2024-11-20 05:32:38.643688] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:07.052 [2024-11-20 05:32:38.865958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:07.310 [2024-11-20 05:32:39.090522] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:07.572 147.25 IOPS, 441.75 MiB/s [2024-11-20T05:32:39.407Z] [2024-11-20 05:32:39.316491] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:07.572 [2024-11-20 05:32:39.316882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:07.830 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:07.831 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:07.831 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:07.831 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:07.831 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:07.831 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:07.831 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.831 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.831 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.831 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:07.831 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.831 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:07.831 "name": "raid_bdev1", 00:22:07.831 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:07.831 "strip_size_kb": 0, 00:22:07.831 "state": "online", 00:22:07.831 "raid_level": "raid1", 00:22:07.831 "superblock": true, 00:22:07.831 "num_base_bdevs": 4, 00:22:07.831 "num_base_bdevs_discovered": 3, 00:22:07.831 "num_base_bdevs_operational": 3, 00:22:07.831 "process": { 00:22:07.831 "type": 
"rebuild", 00:22:07.831 "target": "spare", 00:22:07.831 "progress": { 00:22:07.831 "blocks": 30720, 00:22:07.831 "percent": 48 00:22:07.831 } 00:22:07.831 }, 00:22:07.831 "base_bdevs_list": [ 00:22:07.831 { 00:22:07.831 "name": "spare", 00:22:07.831 "uuid": "14b8dd6f-fdcf-57a2-834f-542a589629cd", 00:22:07.831 "is_configured": true, 00:22:07.831 "data_offset": 2048, 00:22:07.831 "data_size": 63488 00:22:07.831 }, 00:22:07.831 { 00:22:07.831 "name": null, 00:22:07.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.831 "is_configured": false, 00:22:07.831 "data_offset": 0, 00:22:07.831 "data_size": 63488 00:22:07.831 }, 00:22:07.831 { 00:22:07.831 "name": "BaseBdev3", 00:22:07.831 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:07.831 "is_configured": true, 00:22:07.831 "data_offset": 2048, 00:22:07.831 "data_size": 63488 00:22:07.831 }, 00:22:07.831 { 00:22:07.831 "name": "BaseBdev4", 00:22:07.831 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:07.831 "is_configured": true, 00:22:07.831 "data_offset": 2048, 00:22:07.831 "data_size": 63488 00:22:07.831 } 00:22:07.831 ] 00:22:07.831 }' 00:22:07.831 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:07.831 [2024-11-20 05:32:39.643096] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:07.831 [2024-11-20 05:32:39.652640] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:07.831 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:07.831 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:08.088 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:08.088 05:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 
-- # sleep 1 00:22:08.346 127.80 IOPS, 383.40 MiB/s [2024-11-20T05:32:40.181Z] [2024-11-20 05:32:40.132283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:08.604 [2024-11-20 05:32:40.367163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:08.861 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:08.861 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:08.861 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:08.861 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:08.861 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:08.861 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:09.119 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.119 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.119 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.119 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:09.119 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.119 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:09.119 "name": "raid_bdev1", 00:22:09.119 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:09.119 "strip_size_kb": 0, 00:22:09.119 "state": "online", 00:22:09.119 "raid_level": "raid1", 00:22:09.119 "superblock": true, 00:22:09.119 
"num_base_bdevs": 4, 00:22:09.119 "num_base_bdevs_discovered": 3, 00:22:09.119 "num_base_bdevs_operational": 3, 00:22:09.119 "process": { 00:22:09.119 "type": "rebuild", 00:22:09.119 "target": "spare", 00:22:09.119 "progress": { 00:22:09.119 "blocks": 43008, 00:22:09.119 "percent": 67 00:22:09.119 } 00:22:09.119 }, 00:22:09.119 "base_bdevs_list": [ 00:22:09.119 { 00:22:09.119 "name": "spare", 00:22:09.119 "uuid": "14b8dd6f-fdcf-57a2-834f-542a589629cd", 00:22:09.119 "is_configured": true, 00:22:09.119 "data_offset": 2048, 00:22:09.119 "data_size": 63488 00:22:09.119 }, 00:22:09.119 { 00:22:09.119 "name": null, 00:22:09.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.119 "is_configured": false, 00:22:09.119 "data_offset": 0, 00:22:09.119 "data_size": 63488 00:22:09.119 }, 00:22:09.119 { 00:22:09.119 "name": "BaseBdev3", 00:22:09.119 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:09.119 "is_configured": true, 00:22:09.119 "data_offset": 2048, 00:22:09.119 "data_size": 63488 00:22:09.119 }, 00:22:09.119 { 00:22:09.119 "name": "BaseBdev4", 00:22:09.119 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:09.119 "is_configured": true, 00:22:09.119 "data_offset": 2048, 00:22:09.119 "data_size": 63488 00:22:09.119 } 00:22:09.119 ] 00:22:09.119 }' 00:22:09.119 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:09.119 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:09.119 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:09.119 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:09.119 05:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:09.394 [2024-11-20 05:32:41.044051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 
55296 00:22:09.685 111.67 IOPS, 335.00 MiB/s [2024-11-20T05:32:41.520Z] [2024-11-20 05:32:41.263826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:10.251 "name": "raid_bdev1", 00:22:10.251 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:10.251 "strip_size_kb": 0, 00:22:10.251 "state": "online", 00:22:10.251 "raid_level": "raid1", 00:22:10.251 "superblock": true, 00:22:10.251 "num_base_bdevs": 4, 00:22:10.251 "num_base_bdevs_discovered": 3, 00:22:10.251 "num_base_bdevs_operational": 3, 00:22:10.251 "process": { 00:22:10.251 "type": "rebuild", 
00:22:10.251 "target": "spare", 00:22:10.251 "progress": { 00:22:10.251 "blocks": 59392, 00:22:10.251 "percent": 93 00:22:10.251 } 00:22:10.251 }, 00:22:10.251 "base_bdevs_list": [ 00:22:10.251 { 00:22:10.251 "name": "spare", 00:22:10.251 "uuid": "14b8dd6f-fdcf-57a2-834f-542a589629cd", 00:22:10.251 "is_configured": true, 00:22:10.251 "data_offset": 2048, 00:22:10.251 "data_size": 63488 00:22:10.251 }, 00:22:10.251 { 00:22:10.251 "name": null, 00:22:10.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.251 "is_configured": false, 00:22:10.251 "data_offset": 0, 00:22:10.251 "data_size": 63488 00:22:10.251 }, 00:22:10.251 { 00:22:10.251 "name": "BaseBdev3", 00:22:10.251 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:10.251 "is_configured": true, 00:22:10.251 "data_offset": 2048, 00:22:10.251 "data_size": 63488 00:22:10.251 }, 00:22:10.251 { 00:22:10.251 "name": "BaseBdev4", 00:22:10.251 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:10.251 "is_configured": true, 00:22:10.251 "data_offset": 2048, 00:22:10.251 "data_size": 63488 00:22:10.251 } 00:22:10.251 ] 00:22:10.251 }' 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:10.251 05:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:10.251 [2024-11-20 05:32:41.937021] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:10.251 [2024-11-20 05:32:42.036991] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:10.251 [2024-11-20 05:32:42.039785] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:11.444 100.86 IOPS, 302.57 MiB/s [2024-11-20T05:32:43.279Z] 05:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:11.444 05:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.444 05:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:11.444 05:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:11.444 05:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:11.444 05:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:11.444 05:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.444 05:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.444 05:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.444 05:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:11.444 05:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.444 05:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:11.444 "name": "raid_bdev1", 00:22:11.444 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:11.444 "strip_size_kb": 0, 00:22:11.444 "state": "online", 00:22:11.444 "raid_level": "raid1", 00:22:11.444 "superblock": true, 00:22:11.444 "num_base_bdevs": 4, 00:22:11.444 "num_base_bdevs_discovered": 3, 00:22:11.444 "num_base_bdevs_operational": 3, 00:22:11.444 "base_bdevs_list": [ 00:22:11.444 { 00:22:11.444 "name": "spare", 00:22:11.444 "uuid": "14b8dd6f-fdcf-57a2-834f-542a589629cd", 00:22:11.444 "is_configured": true, 
00:22:11.444 "data_offset": 2048, 00:22:11.444 "data_size": 63488 00:22:11.444 }, 00:22:11.444 { 00:22:11.444 "name": null, 00:22:11.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.444 "is_configured": false, 00:22:11.444 "data_offset": 0, 00:22:11.444 "data_size": 63488 00:22:11.444 }, 00:22:11.444 { 00:22:11.444 "name": "BaseBdev3", 00:22:11.444 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:11.444 "is_configured": true, 00:22:11.444 "data_offset": 2048, 00:22:11.444 "data_size": 63488 00:22:11.444 }, 00:22:11.444 { 00:22:11.444 "name": "BaseBdev4", 00:22:11.444 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:11.444 "is_configured": true, 00:22:11.444 "data_offset": 2048, 00:22:11.444 "data_size": 63488 00:22:11.444 } 00:22:11.444 ] 00:22:11.444 }' 00:22:11.444 05:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:11.444 05:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:11.444 05:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:11.444 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:11.444 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:22:11.444 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:11.444 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:11.444 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:11.444 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:11.444 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:11.444 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.444 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.444 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.444 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:11.444 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.444 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:11.444 "name": "raid_bdev1", 00:22:11.444 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:11.444 "strip_size_kb": 0, 00:22:11.444 "state": "online", 00:22:11.444 "raid_level": "raid1", 00:22:11.444 "superblock": true, 00:22:11.444 "num_base_bdevs": 4, 00:22:11.444 "num_base_bdevs_discovered": 3, 00:22:11.444 "num_base_bdevs_operational": 3, 00:22:11.444 "base_bdevs_list": [ 00:22:11.444 { 00:22:11.444 "name": "spare", 00:22:11.444 "uuid": "14b8dd6f-fdcf-57a2-834f-542a589629cd", 00:22:11.444 "is_configured": true, 00:22:11.444 "data_offset": 2048, 00:22:11.444 "data_size": 63488 00:22:11.444 }, 00:22:11.444 { 00:22:11.444 "name": null, 00:22:11.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.444 "is_configured": false, 00:22:11.444 "data_offset": 0, 00:22:11.444 "data_size": 63488 00:22:11.445 }, 00:22:11.445 { 00:22:11.445 "name": "BaseBdev3", 00:22:11.445 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:11.445 "is_configured": true, 00:22:11.445 "data_offset": 2048, 00:22:11.445 "data_size": 63488 00:22:11.445 }, 00:22:11.445 { 00:22:11.445 "name": "BaseBdev4", 00:22:11.445 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:11.445 "is_configured": true, 00:22:11.445 "data_offset": 2048, 00:22:11.445 "data_size": 63488 00:22:11.445 } 00:22:11.445 ] 00:22:11.445 }' 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:22:11.445 91.88 IOPS, 275.62 MiB/s [2024-11-20T05:32:43.280Z] 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.445 "name": "raid_bdev1", 00:22:11.445 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:11.445 "strip_size_kb": 0, 00:22:11.445 "state": "online", 00:22:11.445 "raid_level": "raid1", 00:22:11.445 "superblock": true, 00:22:11.445 "num_base_bdevs": 4, 00:22:11.445 "num_base_bdevs_discovered": 3, 00:22:11.445 "num_base_bdevs_operational": 3, 00:22:11.445 "base_bdevs_list": [ 00:22:11.445 { 00:22:11.445 "name": "spare", 00:22:11.445 "uuid": "14b8dd6f-fdcf-57a2-834f-542a589629cd", 00:22:11.445 "is_configured": true, 00:22:11.445 "data_offset": 2048, 00:22:11.445 "data_size": 63488 00:22:11.445 }, 00:22:11.445 { 00:22:11.445 "name": null, 00:22:11.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.445 "is_configured": false, 00:22:11.445 "data_offset": 0, 00:22:11.445 "data_size": 63488 00:22:11.445 }, 00:22:11.445 { 00:22:11.445 "name": "BaseBdev3", 00:22:11.445 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:11.445 "is_configured": true, 00:22:11.445 "data_offset": 2048, 00:22:11.445 "data_size": 63488 00:22:11.445 }, 00:22:11.445 { 00:22:11.445 "name": "BaseBdev4", 00:22:11.445 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:11.445 "is_configured": true, 00:22:11.445 "data_offset": 2048, 00:22:11.445 "data_size": 63488 00:22:11.445 } 00:22:11.445 ] 00:22:11.445 }' 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.445 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.704 05:32:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:11.704 [2024-11-20 05:32:43.440255] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:11.704 [2024-11-20 05:32:43.440438] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:11.704 00:22:11.704 Latency(us) 00:22:11.704 [2024-11-20T05:32:43.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.704 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:11.704 raid_bdev1 : 8.37 89.71 269.12 0.00 0.00 15379.43 313.50 116149.96 00:22:11.704 [2024-11-20T05:32:43.539Z] =================================================================================================================== 00:22:11.704 [2024-11-20T05:32:43.539Z] Total : 89.71 269.12 0.00 0.00 15379.43 313.50 116149.96 00:22:11.704 { 00:22:11.704 "results": [ 00:22:11.704 { 00:22:11.704 "job": "raid_bdev1", 00:22:11.704 "core_mask": "0x1", 00:22:11.704 "workload": "randrw", 00:22:11.704 "percentage": 50, 00:22:11.704 "status": "finished", 00:22:11.704 "queue_depth": 2, 00:22:11.704 "io_size": 3145728, 00:22:11.704 "runtime": 8.371575, 00:22:11.704 "iops": 89.70832848060252, 00:22:11.704 "mibps": 269.12498544180755, 00:22:11.704 "io_failed": 0, 00:22:11.704 "io_timeout": 0, 00:22:11.704 "avg_latency_us": 15379.433239782855, 00:22:11.704 "min_latency_us": 313.5015384615385, 00:22:11.704 "max_latency_us": 116149.95692307693 00:22:11.704 } 00:22:11.704 ], 00:22:11.704 "core_count": 1 00:22:11.704 } 00:22:11.704 [2024-11-20 05:32:43.475770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:11.704 [2024-11-20 05:32:43.475848] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:11.704 [2024-11-20 05:32:43.475961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:22:11.704 [2024-11-20 05:32:43.475975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:11.704 05:32:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:11.704 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:22:11.963 /dev/nbd0 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:11.963 1+0 records in 00:22:11.963 1+0 records out 00:22:11.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226858 s, 18.1 MB/s 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- 
# (( i < 1 )) 00:22:11.963 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:12.265 /dev/nbd1 00:22:12.265 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:12.265 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:12.265 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:22:12.265 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:22:12.265 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:12.265 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:12.265 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:22:12.265 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:22:12.265 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:12.265 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:12.265 05:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:12.265 1+0 records in 00:22:12.265 1+0 records out 00:22:12.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264935 s, 15.5 MB/s 00:22:12.265 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:12.265 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:22:12.265 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:12.265 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:12.265 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:22:12.265 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:12.266 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:12.266 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:12.548 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:22:12.548 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:12.548 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:12.548 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:12.548 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:22:12.548 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:12.548 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:12.805 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:12.805 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:12.805 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:12.805 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:12.805 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:12.805 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:12.805 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:22:12.805 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:12.805 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:22:12.805 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:22:12.805 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:22:12.805 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:12.805 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:22:12.805 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:12.805 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:12.806 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:12.806 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:22:12.806 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:12.806 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:12.806 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:13.063 /dev/nbd1 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:22:13.063 05:32:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:13.063 1+0 records in 00:22:13.063 1+0 records out 00:22:13.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303862 s, 13.5 MB/s 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 
00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:13.063 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:13.321 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:13.321 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:13.321 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:13.321 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:13.321 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:13.321 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:13.321 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:22:13.321 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:13.321 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:13.321 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:13.321 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0') 00:22:13.321 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:13.321 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:22:13.321 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:13.321 05:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:13.579 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:13.579 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:13.579 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:13.579 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:13.579 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:13.579 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:13.579 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:22:13.579 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:13.579 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:13.579 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:13.579 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.579 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:13.579 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.579 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:22:13.579 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.579 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:13.579 [2024-11-20 05:32:45.274125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:13.579 [2024-11-20 05:32:45.274195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.579 [2024-11-20 05:32:45.274217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:13.579 [2024-11-20 05:32:45.274230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.579 [2024-11-20 05:32:45.276513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.579 [2024-11-20 05:32:45.276556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:13.579 [2024-11-20 05:32:45.276650] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:13.579 [2024-11-20 05:32:45.276701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:13.579 [2024-11-20 05:32:45.276832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:13.580 [2024-11-20 05:32:45.276936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:13.580 spare 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:13.580 [2024-11-20 05:32:45.377040] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007b00 00:22:13.580 [2024-11-20 05:32:45.377093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:13.580 [2024-11-20 05:32:45.377462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:22:13.580 [2024-11-20 05:32:45.377650] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:13.580 [2024-11-20 05:32:45.377671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:13.580 [2024-11-20 05:32:45.377851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:13.580 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.838 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.838 "name": "raid_bdev1", 00:22:13.838 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:13.838 "strip_size_kb": 0, 00:22:13.838 "state": "online", 00:22:13.838 "raid_level": "raid1", 00:22:13.838 "superblock": true, 00:22:13.838 "num_base_bdevs": 4, 00:22:13.838 "num_base_bdevs_discovered": 3, 00:22:13.838 "num_base_bdevs_operational": 3, 00:22:13.838 "base_bdevs_list": [ 00:22:13.838 { 00:22:13.838 "name": "spare", 00:22:13.838 "uuid": "14b8dd6f-fdcf-57a2-834f-542a589629cd", 00:22:13.838 "is_configured": true, 00:22:13.838 "data_offset": 2048, 00:22:13.838 "data_size": 63488 00:22:13.838 }, 00:22:13.838 { 00:22:13.838 "name": null, 00:22:13.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.838 "is_configured": false, 00:22:13.838 "data_offset": 2048, 00:22:13.838 "data_size": 63488 00:22:13.838 }, 00:22:13.838 { 00:22:13.838 "name": "BaseBdev3", 00:22:13.838 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:13.838 "is_configured": true, 00:22:13.838 "data_offset": 2048, 00:22:13.838 "data_size": 63488 00:22:13.838 }, 00:22:13.838 { 00:22:13.838 "name": "BaseBdev4", 00:22:13.838 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:13.838 "is_configured": true, 00:22:13.838 "data_offset": 2048, 00:22:13.838 "data_size": 63488 00:22:13.838 } 00:22:13.838 ] 00:22:13.838 }' 00:22:13.838 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.838 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:14.096 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:14.096 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:14.096 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:14.096 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:14.096 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:14.096 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.096 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.096 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:14.096 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.096 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.096 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:14.096 "name": "raid_bdev1", 00:22:14.096 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:14.096 "strip_size_kb": 0, 00:22:14.096 "state": "online", 00:22:14.096 "raid_level": "raid1", 00:22:14.096 "superblock": true, 00:22:14.096 "num_base_bdevs": 4, 00:22:14.096 "num_base_bdevs_discovered": 3, 00:22:14.096 "num_base_bdevs_operational": 3, 00:22:14.096 "base_bdevs_list": [ 00:22:14.096 { 00:22:14.096 "name": "spare", 00:22:14.096 "uuid": "14b8dd6f-fdcf-57a2-834f-542a589629cd", 00:22:14.096 "is_configured": true, 00:22:14.096 "data_offset": 2048, 00:22:14.096 "data_size": 63488 00:22:14.096 }, 
00:22:14.096 { 00:22:14.096 "name": null, 00:22:14.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.096 "is_configured": false, 00:22:14.096 "data_offset": 2048, 00:22:14.096 "data_size": 63488 00:22:14.096 }, 00:22:14.096 { 00:22:14.096 "name": "BaseBdev3", 00:22:14.096 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:14.096 "is_configured": true, 00:22:14.096 "data_offset": 2048, 00:22:14.096 "data_size": 63488 00:22:14.096 }, 00:22:14.096 { 00:22:14.096 "name": "BaseBdev4", 00:22:14.096 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:14.097 "is_configured": true, 00:22:14.097 "data_offset": 2048, 00:22:14.097 "data_size": 63488 00:22:14.097 } 00:22:14.097 ] 00:22:14.097 }' 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:14.097 05:32:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:14.097 [2024-11-20 05:32:45.846378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:14.097 
05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:14.097 "name": "raid_bdev1", 00:22:14.097 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:14.097 "strip_size_kb": 0, 00:22:14.097 "state": "online", 00:22:14.097 "raid_level": "raid1", 00:22:14.097 "superblock": true, 00:22:14.097 "num_base_bdevs": 4, 00:22:14.097 "num_base_bdevs_discovered": 2, 00:22:14.097 "num_base_bdevs_operational": 2, 00:22:14.097 "base_bdevs_list": [ 00:22:14.097 { 00:22:14.097 "name": null, 00:22:14.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.097 "is_configured": false, 00:22:14.097 "data_offset": 0, 00:22:14.097 "data_size": 63488 00:22:14.097 }, 00:22:14.097 { 00:22:14.097 "name": null, 00:22:14.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.097 "is_configured": false, 00:22:14.097 "data_offset": 2048, 00:22:14.097 "data_size": 63488 00:22:14.097 }, 00:22:14.097 { 00:22:14.097 "name": "BaseBdev3", 00:22:14.097 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:14.097 "is_configured": true, 00:22:14.097 "data_offset": 2048, 00:22:14.097 "data_size": 63488 00:22:14.097 }, 00:22:14.097 { 00:22:14.097 "name": "BaseBdev4", 00:22:14.097 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:14.097 "is_configured": true, 00:22:14.097 "data_offset": 2048, 00:22:14.097 "data_size": 63488 00:22:14.097 } 00:22:14.097 ] 00:22:14.097 }' 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:14.097 05:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:14.355 05:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:14.355 05:32:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.355 05:32:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:14.355 [2024-11-20 05:32:46.158511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:14.355 [2024-11-20 05:32:46.158694] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:22:14.355 [2024-11-20 05:32:46.158719] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:14.355 [2024-11-20 05:32:46.158760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:14.356 [2024-11-20 05:32:46.168485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:22:14.356 05:32:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.356 05:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:14.356 [2024-11-20 05:32:46.170456] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:15.730 "name": "raid_bdev1", 00:22:15.730 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:15.730 "strip_size_kb": 0, 00:22:15.730 "state": "online", 00:22:15.730 "raid_level": "raid1", 00:22:15.730 "superblock": true, 00:22:15.730 "num_base_bdevs": 4, 00:22:15.730 "num_base_bdevs_discovered": 3, 00:22:15.730 "num_base_bdevs_operational": 3, 00:22:15.730 "process": { 00:22:15.730 "type": "rebuild", 00:22:15.730 "target": "spare", 00:22:15.730 "progress": { 00:22:15.730 "blocks": 20480, 00:22:15.730 "percent": 32 00:22:15.730 } 00:22:15.730 }, 00:22:15.730 "base_bdevs_list": [ 00:22:15.730 { 00:22:15.730 "name": "spare", 00:22:15.730 "uuid": "14b8dd6f-fdcf-57a2-834f-542a589629cd", 00:22:15.730 "is_configured": true, 00:22:15.730 "data_offset": 2048, 00:22:15.730 "data_size": 63488 00:22:15.730 }, 00:22:15.730 { 00:22:15.730 "name": null, 00:22:15.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.730 "is_configured": false, 00:22:15.730 "data_offset": 2048, 00:22:15.730 "data_size": 63488 00:22:15.730 }, 00:22:15.730 { 00:22:15.730 "name": "BaseBdev3", 00:22:15.730 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:15.730 "is_configured": true, 00:22:15.730 "data_offset": 2048, 00:22:15.730 "data_size": 63488 00:22:15.730 }, 00:22:15.730 { 00:22:15.730 "name": "BaseBdev4", 00:22:15.730 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:15.730 "is_configured": true, 00:22:15.730 "data_offset": 2048, 00:22:15.730 "data_size": 63488 00:22:15.730 } 00:22:15.730 ] 00:22:15.730 }' 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:15.730 [2024-11-20 05:32:47.264331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:15.730 [2024-11-20 05:32:47.275981] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:15.730 [2024-11-20 05:32:47.276055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:15.730 [2024-11-20 05:32:47.276070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:15.730 [2024-11-20 05:32:47.276077] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.730 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:15.730 "name": "raid_bdev1", 00:22:15.730 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:15.730 "strip_size_kb": 0, 00:22:15.730 "state": "online", 00:22:15.730 "raid_level": "raid1", 00:22:15.730 "superblock": true, 00:22:15.730 "num_base_bdevs": 4, 00:22:15.730 "num_base_bdevs_discovered": 2, 00:22:15.730 "num_base_bdevs_operational": 2, 00:22:15.730 "base_bdevs_list": [ 00:22:15.730 { 00:22:15.730 "name": null, 00:22:15.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.730 "is_configured": false, 00:22:15.730 "data_offset": 0, 00:22:15.730 "data_size": 63488 00:22:15.730 }, 00:22:15.730 { 00:22:15.730 "name": null, 00:22:15.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.730 "is_configured": false, 00:22:15.730 "data_offset": 2048, 
00:22:15.730 "data_size": 63488 00:22:15.730 }, 00:22:15.730 { 00:22:15.730 "name": "BaseBdev3", 00:22:15.730 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:15.730 "is_configured": true, 00:22:15.730 "data_offset": 2048, 00:22:15.730 "data_size": 63488 00:22:15.730 }, 00:22:15.730 { 00:22:15.730 "name": "BaseBdev4", 00:22:15.730 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:15.730 "is_configured": true, 00:22:15.730 "data_offset": 2048, 00:22:15.731 "data_size": 63488 00:22:15.731 } 00:22:15.731 ] 00:22:15.731 }' 00:22:15.731 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.731 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:15.988 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:15.988 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.988 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:15.988 [2024-11-20 05:32:47.625179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:15.988 [2024-11-20 05:32:47.625238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.988 [2024-11-20 05:32:47.625260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:15.988 [2024-11-20 05:32:47.625270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.988 [2024-11-20 05:32:47.625693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.988 [2024-11-20 05:32:47.625716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:15.988 [2024-11-20 05:32:47.625792] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:15.989 [2024-11-20 05:32:47.625804] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:22:15.989 [2024-11-20 05:32:47.625812] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:15.989 [2024-11-20 05:32:47.625829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:15.989 [2024-11-20 05:32:47.633781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:22:15.989 spare 00:22:15.989 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.989 05:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:15.989 [2024-11-20 05:32:47.635381] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:16.924 "name": "raid_bdev1", 00:22:16.924 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:16.924 "strip_size_kb": 0, 00:22:16.924 "state": "online", 00:22:16.924 "raid_level": "raid1", 00:22:16.924 "superblock": true, 00:22:16.924 "num_base_bdevs": 4, 00:22:16.924 "num_base_bdevs_discovered": 3, 00:22:16.924 "num_base_bdevs_operational": 3, 00:22:16.924 "process": { 00:22:16.924 "type": "rebuild", 00:22:16.924 "target": "spare", 00:22:16.924 "progress": { 00:22:16.924 "blocks": 20480, 00:22:16.924 "percent": 32 00:22:16.924 } 00:22:16.924 }, 00:22:16.924 "base_bdevs_list": [ 00:22:16.924 { 00:22:16.924 "name": "spare", 00:22:16.924 "uuid": "14b8dd6f-fdcf-57a2-834f-542a589629cd", 00:22:16.924 "is_configured": true, 00:22:16.924 "data_offset": 2048, 00:22:16.924 "data_size": 63488 00:22:16.924 }, 00:22:16.924 { 00:22:16.924 "name": null, 00:22:16.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.924 "is_configured": false, 00:22:16.924 "data_offset": 2048, 00:22:16.924 "data_size": 63488 00:22:16.924 }, 00:22:16.924 { 00:22:16.924 "name": "BaseBdev3", 00:22:16.924 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:16.924 "is_configured": true, 00:22:16.924 "data_offset": 2048, 00:22:16.924 "data_size": 63488 00:22:16.924 }, 00:22:16.924 { 00:22:16.924 "name": "BaseBdev4", 00:22:16.924 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:16.924 "is_configured": true, 00:22:16.924 "data_offset": 2048, 00:22:16.924 "data_size": 63488 00:22:16.924 } 00:22:16.924 ] 00:22:16.924 }' 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.924 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:16.924 [2024-11-20 05:32:48.741800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:17.184 [2024-11-20 05:32:48.840893] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:17.184 [2024-11-20 05:32:48.840960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.184 [2024-11-20 05:32:48.840974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:17.184 [2024-11-20 05:32:48.840980] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:17.184 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.184 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:17.184 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:17.184 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.184 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:17.184 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:17.184 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:17.184 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:22:17.184 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.184 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.184 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.184 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.185 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.185 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.185 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:17.185 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.185 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.185 "name": "raid_bdev1", 00:22:17.185 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:17.185 "strip_size_kb": 0, 00:22:17.185 "state": "online", 00:22:17.185 "raid_level": "raid1", 00:22:17.185 "superblock": true, 00:22:17.185 "num_base_bdevs": 4, 00:22:17.185 "num_base_bdevs_discovered": 2, 00:22:17.185 "num_base_bdevs_operational": 2, 00:22:17.185 "base_bdevs_list": [ 00:22:17.185 { 00:22:17.185 "name": null, 00:22:17.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.185 "is_configured": false, 00:22:17.185 "data_offset": 0, 00:22:17.185 "data_size": 63488 00:22:17.185 }, 00:22:17.185 { 00:22:17.185 "name": null, 00:22:17.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.185 "is_configured": false, 00:22:17.185 "data_offset": 2048, 00:22:17.185 "data_size": 63488 00:22:17.185 }, 00:22:17.185 { 00:22:17.185 "name": "BaseBdev3", 00:22:17.185 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:17.185 "is_configured": true, 
00:22:17.185 "data_offset": 2048, 00:22:17.185 "data_size": 63488 00:22:17.185 }, 00:22:17.185 { 00:22:17.185 "name": "BaseBdev4", 00:22:17.185 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:17.185 "is_configured": true, 00:22:17.185 "data_offset": 2048, 00:22:17.185 "data_size": 63488 00:22:17.185 } 00:22:17.185 ] 00:22:17.185 }' 00:22:17.185 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.185 05:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:17.445 "name": "raid_bdev1", 00:22:17.445 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:17.445 "strip_size_kb": 0, 00:22:17.445 "state": "online", 00:22:17.445 "raid_level": "raid1", 00:22:17.445 
"superblock": true, 00:22:17.445 "num_base_bdevs": 4, 00:22:17.445 "num_base_bdevs_discovered": 2, 00:22:17.445 "num_base_bdevs_operational": 2, 00:22:17.445 "base_bdevs_list": [ 00:22:17.445 { 00:22:17.445 "name": null, 00:22:17.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.445 "is_configured": false, 00:22:17.445 "data_offset": 0, 00:22:17.445 "data_size": 63488 00:22:17.445 }, 00:22:17.445 { 00:22:17.445 "name": null, 00:22:17.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.445 "is_configured": false, 00:22:17.445 "data_offset": 2048, 00:22:17.445 "data_size": 63488 00:22:17.445 }, 00:22:17.445 { 00:22:17.445 "name": "BaseBdev3", 00:22:17.445 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:17.445 "is_configured": true, 00:22:17.445 "data_offset": 2048, 00:22:17.445 "data_size": 63488 00:22:17.445 }, 00:22:17.445 { 00:22:17.445 "name": "BaseBdev4", 00:22:17.445 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:17.445 "is_configured": true, 00:22:17.445 "data_offset": 2048, 00:22:17.445 "data_size": 63488 00:22:17.445 } 00:22:17.445 ] 00:22:17.445 }' 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.445 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:17.774 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:22:17.774 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:17.774 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.774 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:17.774 [2024-11-20 05:32:49.282287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:17.774 [2024-11-20 05:32:49.282344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.774 [2024-11-20 05:32:49.282360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:22:17.774 [2024-11-20 05:32:49.282377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.774 [2024-11-20 05:32:49.282731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.774 [2024-11-20 05:32:49.282750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:17.774 [2024-11-20 05:32:49.282811] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:17.774 [2024-11-20 05:32:49.282822] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:22:17.774 [2024-11-20 05:32:49.282831] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:17.774 [2024-11-20 05:32:49.282839] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:17.774 BaseBdev1 00:22:17.774 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.774 05:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.708 "name": "raid_bdev1", 00:22:18.708 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:18.708 "strip_size_kb": 0, 00:22:18.708 "state": "online", 00:22:18.708 "raid_level": "raid1", 00:22:18.708 "superblock": true, 00:22:18.708 
"num_base_bdevs": 4, 00:22:18.708 "num_base_bdevs_discovered": 2, 00:22:18.708 "num_base_bdevs_operational": 2, 00:22:18.708 "base_bdevs_list": [ 00:22:18.708 { 00:22:18.708 "name": null, 00:22:18.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.708 "is_configured": false, 00:22:18.708 "data_offset": 0, 00:22:18.708 "data_size": 63488 00:22:18.708 }, 00:22:18.708 { 00:22:18.708 "name": null, 00:22:18.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.708 "is_configured": false, 00:22:18.708 "data_offset": 2048, 00:22:18.708 "data_size": 63488 00:22:18.708 }, 00:22:18.708 { 00:22:18.708 "name": "BaseBdev3", 00:22:18.708 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:18.708 "is_configured": true, 00:22:18.708 "data_offset": 2048, 00:22:18.708 "data_size": 63488 00:22:18.708 }, 00:22:18.708 { 00:22:18.708 "name": "BaseBdev4", 00:22:18.708 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:18.708 "is_configured": true, 00:22:18.708 "data_offset": 2048, 00:22:18.708 "data_size": 63488 00:22:18.708 } 00:22:18.708 ] 00:22:18.708 }' 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.708 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.966 05:32:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:18.966 "name": "raid_bdev1", 00:22:18.966 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:18.966 "strip_size_kb": 0, 00:22:18.966 "state": "online", 00:22:18.966 "raid_level": "raid1", 00:22:18.966 "superblock": true, 00:22:18.966 "num_base_bdevs": 4, 00:22:18.966 "num_base_bdevs_discovered": 2, 00:22:18.966 "num_base_bdevs_operational": 2, 00:22:18.966 "base_bdevs_list": [ 00:22:18.966 { 00:22:18.966 "name": null, 00:22:18.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.966 "is_configured": false, 00:22:18.966 "data_offset": 0, 00:22:18.966 "data_size": 63488 00:22:18.966 }, 00:22:18.966 { 00:22:18.966 "name": null, 00:22:18.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.966 "is_configured": false, 00:22:18.966 "data_offset": 2048, 00:22:18.966 "data_size": 63488 00:22:18.966 }, 00:22:18.966 { 00:22:18.966 "name": "BaseBdev3", 00:22:18.966 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:18.966 "is_configured": true, 00:22:18.966 "data_offset": 2048, 00:22:18.966 "data_size": 63488 00:22:18.966 }, 00:22:18.966 { 00:22:18.966 "name": "BaseBdev4", 00:22:18.966 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:18.966 "is_configured": true, 00:22:18.966 "data_offset": 2048, 00:22:18.966 "data_size": 63488 00:22:18.966 } 00:22:18.966 ] 00:22:18.966 }' 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:18.966 05:32:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:18.966 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.967 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:18.967 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.967 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:18.967 [2024-11-20 05:32:50.694837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:18.967 [2024-11-20 05:32:50.695001] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:22:18.967 [2024-11-20 05:32:50.695024] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:22:18.967 request: 00:22:18.967 { 00:22:18.967 "base_bdev": "BaseBdev1", 00:22:18.967 "raid_bdev": "raid_bdev1", 00:22:18.967 "method": "bdev_raid_add_base_bdev", 00:22:18.967 "req_id": 1 00:22:18.967 } 00:22:18.967 Got JSON-RPC error response 00:22:18.967 response: 00:22:18.967 { 00:22:18.967 "code": -22, 00:22:18.967 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:18.967 } 00:22:18.967 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:18.967 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:22:18.967 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:18.967 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:18.967 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:18.967 05:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:19.899 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:19.900 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:19.900 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:19.900 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:19.900 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:19.900 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:19.900 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.900 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.900 05:32:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.900 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.900 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.900 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.900 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.900 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:19.900 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.158 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.158 "name": "raid_bdev1", 00:22:20.158 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:20.158 "strip_size_kb": 0, 00:22:20.158 "state": "online", 00:22:20.158 "raid_level": "raid1", 00:22:20.158 "superblock": true, 00:22:20.158 "num_base_bdevs": 4, 00:22:20.158 "num_base_bdevs_discovered": 2, 00:22:20.158 "num_base_bdevs_operational": 2, 00:22:20.158 "base_bdevs_list": [ 00:22:20.158 { 00:22:20.158 "name": null, 00:22:20.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.158 "is_configured": false, 00:22:20.158 "data_offset": 0, 00:22:20.158 "data_size": 63488 00:22:20.158 }, 00:22:20.158 { 00:22:20.158 "name": null, 00:22:20.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.158 "is_configured": false, 00:22:20.158 "data_offset": 2048, 00:22:20.159 "data_size": 63488 00:22:20.159 }, 00:22:20.159 { 00:22:20.159 "name": "BaseBdev3", 00:22:20.159 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:20.159 "is_configured": true, 00:22:20.159 "data_offset": 2048, 00:22:20.159 "data_size": 63488 00:22:20.159 }, 00:22:20.159 { 00:22:20.159 "name": "BaseBdev4", 00:22:20.159 "uuid": 
"e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:20.159 "is_configured": true, 00:22:20.159 "data_offset": 2048, 00:22:20.159 "data_size": 63488 00:22:20.159 } 00:22:20.159 ] 00:22:20.159 }' 00:22:20.159 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.159 05:32:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:20.418 "name": "raid_bdev1", 00:22:20.418 "uuid": "6b781076-c1f5-4aaf-9d74-af3b46da2ecb", 00:22:20.418 "strip_size_kb": 0, 00:22:20.418 "state": "online", 00:22:20.418 "raid_level": "raid1", 00:22:20.418 "superblock": true, 00:22:20.418 "num_base_bdevs": 4, 00:22:20.418 "num_base_bdevs_discovered": 2, 00:22:20.418 "num_base_bdevs_operational": 2, 00:22:20.418 
"base_bdevs_list": [ 00:22:20.418 { 00:22:20.418 "name": null, 00:22:20.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.418 "is_configured": false, 00:22:20.418 "data_offset": 0, 00:22:20.418 "data_size": 63488 00:22:20.418 }, 00:22:20.418 { 00:22:20.418 "name": null, 00:22:20.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.418 "is_configured": false, 00:22:20.418 "data_offset": 2048, 00:22:20.418 "data_size": 63488 00:22:20.418 }, 00:22:20.418 { 00:22:20.418 "name": "BaseBdev3", 00:22:20.418 "uuid": "eb8b6450-b59a-57cf-8b43-e2de6653a497", 00:22:20.418 "is_configured": true, 00:22:20.418 "data_offset": 2048, 00:22:20.418 "data_size": 63488 00:22:20.418 }, 00:22:20.418 { 00:22:20.418 "name": "BaseBdev4", 00:22:20.418 "uuid": "e1677224-a954-59d8-ad79-12c21dc801e0", 00:22:20.418 "is_configured": true, 00:22:20.418 "data_offset": 2048, 00:22:20.418 "data_size": 63488 00:22:20.418 } 00:22:20.418 ] 00:22:20.418 }' 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77052 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 77052 ']' 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 77052 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77052 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:20.418 killing process with pid 77052 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77052' 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 77052 00:22:20.418 Received shutdown signal, test time was about 17.075573 seconds 00:22:20.418 00:22:20.418 Latency(us) 00:22:20.418 [2024-11-20T05:32:52.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.418 [2024-11-20T05:32:52.253Z] =================================================================================================================== 00:22:20.418 [2024-11-20T05:32:52.253Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:20.418 05:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 77052 00:22:20.418 [2024-11-20 05:32:52.164236] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:20.418 [2024-11-20 05:32:52.164417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:20.418 [2024-11-20 05:32:52.164513] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:20.418 [2024-11-20 05:32:52.164527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:20.782 [2024-11-20 05:32:52.445469] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:21.720 05:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:22:21.720 00:22:21.720 real 0m19.724s 00:22:21.720 user 0m24.920s 00:22:21.720 sys 0m1.899s 00:22:21.720 05:32:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:21.720 ************************************ 00:22:21.720 END TEST raid_rebuild_test_sb_io 00:22:21.720 ************************************ 00:22:21.720 05:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:21.720 05:32:53 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:22:21.720 05:32:53 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:22:21.720 05:32:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:22:21.720 05:32:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:21.720 05:32:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:21.720 ************************************ 00:22:21.720 START TEST raid5f_state_function_test 00:22:21.720 ************************************ 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:21.720 05:32:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 
00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=77765 00:22:21.720 Process raid pid: 77765 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77765' 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 77765 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 77765 ']' 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:21.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:21.720 05:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.720 [2024-11-20 05:32:53.383709] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:22:21.720 [2024-11-20 05:32:53.383880] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.720 [2024-11-20 05:32:53.547067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.979 [2024-11-20 05:32:53.667700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.237 [2024-11-20 05:32:53.826344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:22.237 [2024-11-20 05:32:53.826415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.495 [2024-11-20 05:32:54.291540] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:22.495 [2024-11-20 05:32:54.291615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:22.495 [2024-11-20 05:32:54.291627] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:22.495 [2024-11-20 05:32:54.291638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:22.495 [2024-11-20 05:32:54.291649] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:22:22.495 [2024-11-20 05:32:54.291659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.495 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:22.753 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.753 "name": "Existed_Raid", 00:22:22.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.753 "strip_size_kb": 64, 00:22:22.753 "state": "configuring", 00:22:22.753 "raid_level": "raid5f", 00:22:22.753 "superblock": false, 00:22:22.753 "num_base_bdevs": 3, 00:22:22.753 "num_base_bdevs_discovered": 0, 00:22:22.753 "num_base_bdevs_operational": 3, 00:22:22.753 "base_bdevs_list": [ 00:22:22.753 { 00:22:22.753 "name": "BaseBdev1", 00:22:22.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.753 "is_configured": false, 00:22:22.753 "data_offset": 0, 00:22:22.753 "data_size": 0 00:22:22.753 }, 00:22:22.753 { 00:22:22.753 "name": "BaseBdev2", 00:22:22.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.753 "is_configured": false, 00:22:22.753 "data_offset": 0, 00:22:22.753 "data_size": 0 00:22:22.753 }, 00:22:22.753 { 00:22:22.753 "name": "BaseBdev3", 00:22:22.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.753 "is_configured": false, 00:22:22.753 "data_offset": 0, 00:22:22.753 "data_size": 0 00:22:22.753 } 00:22:22.753 ] 00:22:22.753 }' 00:22:22.753 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.753 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.012 [2024-11-20 05:32:54.627553] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:23.012 [2024-11-20 05:32:54.627597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.012 [2024-11-20 05:32:54.635556] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:23.012 [2024-11-20 05:32:54.635609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:23.012 [2024-11-20 05:32:54.635621] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:23.012 [2024-11-20 05:32:54.635635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:23.012 [2024-11-20 05:32:54.635644] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:23.012 [2024-11-20 05:32:54.635655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.012 [2024-11-20 05:32:54.668750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:23.012 BaseBdev1 00:22:23.012 05:32:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.012 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.012 [ 00:22:23.012 { 00:22:23.012 "name": "BaseBdev1", 00:22:23.012 "aliases": [ 00:22:23.012 "98b84634-f6a5-49ef-8451-f705ea6a1464" 00:22:23.012 ], 00:22:23.012 "product_name": "Malloc disk", 00:22:23.012 "block_size": 512, 00:22:23.012 "num_blocks": 65536, 00:22:23.012 "uuid": "98b84634-f6a5-49ef-8451-f705ea6a1464", 00:22:23.012 "assigned_rate_limits": { 00:22:23.012 "rw_ios_per_sec": 0, 00:22:23.012 
"rw_mbytes_per_sec": 0, 00:22:23.012 "r_mbytes_per_sec": 0, 00:22:23.012 "w_mbytes_per_sec": 0 00:22:23.012 }, 00:22:23.012 "claimed": true, 00:22:23.012 "claim_type": "exclusive_write", 00:22:23.012 "zoned": false, 00:22:23.012 "supported_io_types": { 00:22:23.013 "read": true, 00:22:23.013 "write": true, 00:22:23.013 "unmap": true, 00:22:23.013 "flush": true, 00:22:23.013 "reset": true, 00:22:23.013 "nvme_admin": false, 00:22:23.013 "nvme_io": false, 00:22:23.013 "nvme_io_md": false, 00:22:23.013 "write_zeroes": true, 00:22:23.013 "zcopy": true, 00:22:23.013 "get_zone_info": false, 00:22:23.013 "zone_management": false, 00:22:23.013 "zone_append": false, 00:22:23.013 "compare": false, 00:22:23.013 "compare_and_write": false, 00:22:23.013 "abort": true, 00:22:23.013 "seek_hole": false, 00:22:23.013 "seek_data": false, 00:22:23.013 "copy": true, 00:22:23.013 "nvme_iov_md": false 00:22:23.013 }, 00:22:23.013 "memory_domains": [ 00:22:23.013 { 00:22:23.013 "dma_device_id": "system", 00:22:23.013 "dma_device_type": 1 00:22:23.013 }, 00:22:23.013 { 00:22:23.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.013 "dma_device_type": 2 00:22:23.013 } 00:22:23.013 ], 00:22:23.013 "driver_specific": {} 00:22:23.013 } 00:22:23.013 ] 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:23.013 05:32:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:23.013 "name": "Existed_Raid", 00:22:23.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.013 "strip_size_kb": 64, 00:22:23.013 "state": "configuring", 00:22:23.013 "raid_level": "raid5f", 00:22:23.013 "superblock": false, 00:22:23.013 "num_base_bdevs": 3, 00:22:23.013 "num_base_bdevs_discovered": 1, 00:22:23.013 "num_base_bdevs_operational": 3, 00:22:23.013 "base_bdevs_list": [ 00:22:23.013 { 00:22:23.013 "name": "BaseBdev1", 00:22:23.013 "uuid": "98b84634-f6a5-49ef-8451-f705ea6a1464", 00:22:23.013 "is_configured": true, 00:22:23.013 "data_offset": 0, 00:22:23.013 "data_size": 65536 00:22:23.013 }, 00:22:23.013 { 00:22:23.013 "name": 
"BaseBdev2", 00:22:23.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.013 "is_configured": false, 00:22:23.013 "data_offset": 0, 00:22:23.013 "data_size": 0 00:22:23.013 }, 00:22:23.013 { 00:22:23.013 "name": "BaseBdev3", 00:22:23.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.013 "is_configured": false, 00:22:23.013 "data_offset": 0, 00:22:23.013 "data_size": 0 00:22:23.013 } 00:22:23.013 ] 00:22:23.013 }' 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.013 05:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.271 [2024-11-20 05:32:55.012884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:23.271 [2024-11-20 05:32:55.012938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.271 [2024-11-20 05:32:55.020949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:23.271 [2024-11-20 05:32:55.022818] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:22:23.271 [2024-11-20 05:32:55.022867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:23.271 [2024-11-20 05:32:55.022877] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:23.271 [2024-11-20 05:32:55.022886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:23.271 "name": "Existed_Raid", 00:22:23.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.271 "strip_size_kb": 64, 00:22:23.271 "state": "configuring", 00:22:23.271 "raid_level": "raid5f", 00:22:23.271 "superblock": false, 00:22:23.271 "num_base_bdevs": 3, 00:22:23.271 "num_base_bdevs_discovered": 1, 00:22:23.271 "num_base_bdevs_operational": 3, 00:22:23.271 "base_bdevs_list": [ 00:22:23.271 { 00:22:23.271 "name": "BaseBdev1", 00:22:23.271 "uuid": "98b84634-f6a5-49ef-8451-f705ea6a1464", 00:22:23.271 "is_configured": true, 00:22:23.271 "data_offset": 0, 00:22:23.271 "data_size": 65536 00:22:23.271 }, 00:22:23.271 { 00:22:23.271 "name": "BaseBdev2", 00:22:23.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.271 "is_configured": false, 00:22:23.271 "data_offset": 0, 00:22:23.271 "data_size": 0 00:22:23.271 }, 00:22:23.271 { 00:22:23.271 "name": "BaseBdev3", 00:22:23.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.271 "is_configured": false, 00:22:23.271 "data_offset": 0, 00:22:23.271 "data_size": 0 00:22:23.271 } 00:22:23.271 ] 00:22:23.271 }' 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.271 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.601 [2024-11-20 05:32:55.400736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:23.601 BaseBdev2 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.601 05:32:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:23.601 [ 00:22:23.601 { 00:22:23.601 "name": "BaseBdev2", 00:22:23.601 "aliases": [ 00:22:23.601 "eebcf9a1-bf34-4ef0-8f34-7b829120b511" 00:22:23.601 ], 00:22:23.601 "product_name": "Malloc disk", 00:22:23.601 "block_size": 512, 00:22:23.601 "num_blocks": 65536, 00:22:23.601 "uuid": "eebcf9a1-bf34-4ef0-8f34-7b829120b511", 00:22:23.601 "assigned_rate_limits": { 00:22:23.601 "rw_ios_per_sec": 0, 00:22:23.601 "rw_mbytes_per_sec": 0, 00:22:23.601 "r_mbytes_per_sec": 0, 00:22:23.601 "w_mbytes_per_sec": 0 00:22:23.601 }, 00:22:23.601 "claimed": true, 00:22:23.601 "claim_type": "exclusive_write", 00:22:23.601 "zoned": false, 00:22:23.601 "supported_io_types": { 00:22:23.601 "read": true, 00:22:23.601 "write": true, 00:22:23.601 "unmap": true, 00:22:23.601 "flush": true, 00:22:23.601 "reset": true, 00:22:23.601 "nvme_admin": false, 00:22:23.601 "nvme_io": false, 00:22:23.601 "nvme_io_md": false, 00:22:23.601 "write_zeroes": true, 00:22:23.601 "zcopy": true, 00:22:23.602 "get_zone_info": false, 00:22:23.602 "zone_management": false, 00:22:23.602 "zone_append": false, 00:22:23.602 "compare": false, 00:22:23.602 "compare_and_write": false, 00:22:23.602 "abort": true, 00:22:23.602 "seek_hole": false, 00:22:23.602 "seek_data": false, 00:22:23.602 "copy": true, 00:22:23.602 "nvme_iov_md": false 00:22:23.602 }, 00:22:23.602 "memory_domains": [ 00:22:23.602 { 00:22:23.602 "dma_device_id": "system", 00:22:23.602 "dma_device_type": 1 00:22:23.602 }, 00:22:23.602 { 00:22:23.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.602 "dma_device_type": 2 00:22:23.602 } 00:22:23.602 ], 00:22:23.602 "driver_specific": {} 00:22:23.602 } 00:22:23.602 ] 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.602 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.864 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.864 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:22:23.864 "name": "Existed_Raid", 00:22:23.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.864 "strip_size_kb": 64, 00:22:23.864 "state": "configuring", 00:22:23.864 "raid_level": "raid5f", 00:22:23.864 "superblock": false, 00:22:23.864 "num_base_bdevs": 3, 00:22:23.864 "num_base_bdevs_discovered": 2, 00:22:23.864 "num_base_bdevs_operational": 3, 00:22:23.864 "base_bdevs_list": [ 00:22:23.864 { 00:22:23.864 "name": "BaseBdev1", 00:22:23.864 "uuid": "98b84634-f6a5-49ef-8451-f705ea6a1464", 00:22:23.864 "is_configured": true, 00:22:23.864 "data_offset": 0, 00:22:23.864 "data_size": 65536 00:22:23.864 }, 00:22:23.864 { 00:22:23.864 "name": "BaseBdev2", 00:22:23.864 "uuid": "eebcf9a1-bf34-4ef0-8f34-7b829120b511", 00:22:23.864 "is_configured": true, 00:22:23.864 "data_offset": 0, 00:22:23.864 "data_size": 65536 00:22:23.864 }, 00:22:23.864 { 00:22:23.864 "name": "BaseBdev3", 00:22:23.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.864 "is_configured": false, 00:22:23.864 "data_offset": 0, 00:22:23.864 "data_size": 0 00:22:23.864 } 00:22:23.864 ] 00:22:23.864 }' 00:22:23.864 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.864 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.124 [2024-11-20 05:32:55.843921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:24.124 [2024-11-20 05:32:55.843975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:24.124 [2024-11-20 05:32:55.843987] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:24.124 [2024-11-20 05:32:55.844249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:24.124 [2024-11-20 05:32:55.848060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:24.124 [2024-11-20 05:32:55.848084] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:24.124 [2024-11-20 05:32:55.848349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.124 BaseBdev3 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.124 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.124 [ 00:22:24.124 { 00:22:24.124 "name": "BaseBdev3", 00:22:24.124 "aliases": [ 00:22:24.124 "990e7797-3d4b-4473-8a37-4777aa817c7f" 00:22:24.124 ], 00:22:24.124 "product_name": "Malloc disk", 00:22:24.124 "block_size": 512, 00:22:24.124 "num_blocks": 65536, 00:22:24.124 "uuid": "990e7797-3d4b-4473-8a37-4777aa817c7f", 00:22:24.124 "assigned_rate_limits": { 00:22:24.124 "rw_ios_per_sec": 0, 00:22:24.124 "rw_mbytes_per_sec": 0, 00:22:24.124 "r_mbytes_per_sec": 0, 00:22:24.124 "w_mbytes_per_sec": 0 00:22:24.124 }, 00:22:24.124 "claimed": true, 00:22:24.124 "claim_type": "exclusive_write", 00:22:24.124 "zoned": false, 00:22:24.124 "supported_io_types": { 00:22:24.124 "read": true, 00:22:24.124 "write": true, 00:22:24.124 "unmap": true, 00:22:24.124 "flush": true, 00:22:24.124 "reset": true, 00:22:24.124 "nvme_admin": false, 00:22:24.124 "nvme_io": false, 00:22:24.124 "nvme_io_md": false, 00:22:24.124 "write_zeroes": true, 00:22:24.124 "zcopy": true, 00:22:24.124 "get_zone_info": false, 00:22:24.124 "zone_management": false, 00:22:24.124 "zone_append": false, 00:22:24.124 "compare": false, 00:22:24.124 "compare_and_write": false, 00:22:24.124 "abort": true, 00:22:24.124 "seek_hole": false, 00:22:24.124 "seek_data": false, 00:22:24.124 "copy": true, 00:22:24.124 "nvme_iov_md": false 00:22:24.124 }, 00:22:24.125 "memory_domains": [ 00:22:24.125 { 00:22:24.125 "dma_device_id": "system", 00:22:24.125 "dma_device_type": 1 00:22:24.125 }, 00:22:24.125 { 00:22:24.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.125 "dma_device_type": 2 00:22:24.125 } 00:22:24.125 ], 00:22:24.125 "driver_specific": {} 00:22:24.125 } 00:22:24.125 ] 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.125 05:32:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.125 "name": "Existed_Raid", 00:22:24.125 "uuid": "602286e7-2db3-4bbc-bc97-29fa6f3b2970", 00:22:24.125 "strip_size_kb": 64, 00:22:24.125 "state": "online", 00:22:24.125 "raid_level": "raid5f", 00:22:24.125 "superblock": false, 00:22:24.125 "num_base_bdevs": 3, 00:22:24.125 "num_base_bdevs_discovered": 3, 00:22:24.125 "num_base_bdevs_operational": 3, 00:22:24.125 "base_bdevs_list": [ 00:22:24.125 { 00:22:24.125 "name": "BaseBdev1", 00:22:24.125 "uuid": "98b84634-f6a5-49ef-8451-f705ea6a1464", 00:22:24.125 "is_configured": true, 00:22:24.125 "data_offset": 0, 00:22:24.125 "data_size": 65536 00:22:24.125 }, 00:22:24.125 { 00:22:24.125 "name": "BaseBdev2", 00:22:24.125 "uuid": "eebcf9a1-bf34-4ef0-8f34-7b829120b511", 00:22:24.125 "is_configured": true, 00:22:24.125 "data_offset": 0, 00:22:24.125 "data_size": 65536 00:22:24.125 }, 00:22:24.125 { 00:22:24.125 "name": "BaseBdev3", 00:22:24.125 "uuid": "990e7797-3d4b-4473-8a37-4777aa817c7f", 00:22:24.125 "is_configured": true, 00:22:24.125 "data_offset": 0, 00:22:24.125 "data_size": 65536 00:22:24.125 } 00:22:24.125 ] 00:22:24.125 }' 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.125 05:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.690 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:24.690 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:24.690 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:24.690 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:24.690 05:32:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:24.690 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:24.690 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:24.690 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.690 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:24.690 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.690 [2024-11-20 05:32:56.236849] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:24.690 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.690 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:24.690 "name": "Existed_Raid", 00:22:24.690 "aliases": [ 00:22:24.690 "602286e7-2db3-4bbc-bc97-29fa6f3b2970" 00:22:24.690 ], 00:22:24.690 "product_name": "Raid Volume", 00:22:24.690 "block_size": 512, 00:22:24.690 "num_blocks": 131072, 00:22:24.690 "uuid": "602286e7-2db3-4bbc-bc97-29fa6f3b2970", 00:22:24.690 "assigned_rate_limits": { 00:22:24.690 "rw_ios_per_sec": 0, 00:22:24.690 "rw_mbytes_per_sec": 0, 00:22:24.690 "r_mbytes_per_sec": 0, 00:22:24.690 "w_mbytes_per_sec": 0 00:22:24.690 }, 00:22:24.690 "claimed": false, 00:22:24.690 "zoned": false, 00:22:24.690 "supported_io_types": { 00:22:24.690 "read": true, 00:22:24.690 "write": true, 00:22:24.690 "unmap": false, 00:22:24.690 "flush": false, 00:22:24.690 "reset": true, 00:22:24.690 "nvme_admin": false, 00:22:24.690 "nvme_io": false, 00:22:24.690 "nvme_io_md": false, 00:22:24.690 "write_zeroes": true, 00:22:24.690 "zcopy": false, 00:22:24.690 "get_zone_info": false, 00:22:24.690 "zone_management": false, 00:22:24.690 "zone_append": false, 
00:22:24.690 "compare": false, 00:22:24.690 "compare_and_write": false, 00:22:24.690 "abort": false, 00:22:24.690 "seek_hole": false, 00:22:24.690 "seek_data": false, 00:22:24.690 "copy": false, 00:22:24.690 "nvme_iov_md": false 00:22:24.690 }, 00:22:24.690 "driver_specific": { 00:22:24.690 "raid": { 00:22:24.690 "uuid": "602286e7-2db3-4bbc-bc97-29fa6f3b2970", 00:22:24.690 "strip_size_kb": 64, 00:22:24.690 "state": "online", 00:22:24.690 "raid_level": "raid5f", 00:22:24.690 "superblock": false, 00:22:24.690 "num_base_bdevs": 3, 00:22:24.690 "num_base_bdevs_discovered": 3, 00:22:24.690 "num_base_bdevs_operational": 3, 00:22:24.690 "base_bdevs_list": [ 00:22:24.690 { 00:22:24.690 "name": "BaseBdev1", 00:22:24.690 "uuid": "98b84634-f6a5-49ef-8451-f705ea6a1464", 00:22:24.690 "is_configured": true, 00:22:24.690 "data_offset": 0, 00:22:24.690 "data_size": 65536 00:22:24.690 }, 00:22:24.690 { 00:22:24.690 "name": "BaseBdev2", 00:22:24.690 "uuid": "eebcf9a1-bf34-4ef0-8f34-7b829120b511", 00:22:24.690 "is_configured": true, 00:22:24.690 "data_offset": 0, 00:22:24.690 "data_size": 65536 00:22:24.690 }, 00:22:24.690 { 00:22:24.690 "name": "BaseBdev3", 00:22:24.690 "uuid": "990e7797-3d4b-4473-8a37-4777aa817c7f", 00:22:24.690 "is_configured": true, 00:22:24.690 "data_offset": 0, 00:22:24.690 "data_size": 65536 00:22:24.690 } 00:22:24.690 ] 00:22:24.690 } 00:22:24.690 } 00:22:24.690 }' 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:24.691 BaseBdev2 00:22:24.691 BaseBdev3' 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.691 [2024-11-20 05:32:56.432691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:24.691 
05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:24.691 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.948 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.948 "name": "Existed_Raid", 00:22:24.948 "uuid": "602286e7-2db3-4bbc-bc97-29fa6f3b2970", 00:22:24.948 "strip_size_kb": 64, 00:22:24.948 "state": 
"online", 00:22:24.948 "raid_level": "raid5f", 00:22:24.948 "superblock": false, 00:22:24.948 "num_base_bdevs": 3, 00:22:24.948 "num_base_bdevs_discovered": 2, 00:22:24.948 "num_base_bdevs_operational": 2, 00:22:24.948 "base_bdevs_list": [ 00:22:24.948 { 00:22:24.948 "name": null, 00:22:24.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.948 "is_configured": false, 00:22:24.948 "data_offset": 0, 00:22:24.948 "data_size": 65536 00:22:24.948 }, 00:22:24.948 { 00:22:24.948 "name": "BaseBdev2", 00:22:24.948 "uuid": "eebcf9a1-bf34-4ef0-8f34-7b829120b511", 00:22:24.948 "is_configured": true, 00:22:24.948 "data_offset": 0, 00:22:24.948 "data_size": 65536 00:22:24.948 }, 00:22:24.948 { 00:22:24.948 "name": "BaseBdev3", 00:22:24.948 "uuid": "990e7797-3d4b-4473-8a37-4777aa817c7f", 00:22:24.948 "is_configured": true, 00:22:24.948 "data_offset": 0, 00:22:24.948 "data_size": 65536 00:22:24.948 } 00:22:24.948 ] 00:22:24.948 }' 00:22:24.948 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.948 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.207 [2024-11-20 05:32:56.896070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:25.207 [2024-11-20 05:32:56.896171] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:25.207 [2024-11-20 05:32:56.956003] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
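The trace above shows the core of `verify_raid_bdev_state`: it runs `rpc_cmd bdev_raid_get_bdevs all`, picks out the record for the raid bdev under test with a `jq` `select`, and then compares individual fields (`state`, `num_base_bdevs_operational`, and so on) against the expected values. The standalone sketch below reproduces that selection against an invented sample of the RPC output (the JSON payload and values here are illustrative, not captured from this run):

```shell
# Hypothetical sample of `rpc_cmd bdev_raid_get_bdevs all` output; field names
# mirror the trace above, but the payload itself is invented for illustration.
json='[{"name":"Existed_Raid","state":"online","raid_level":"raid5f",
"strip_size_kb":64,"num_base_bdevs":3,
"num_base_bdevs_discovered":2,"num_base_bdevs_operational":2},
{"name":"Other_Raid","state":"configuring"}]'

# Same filter the test script uses: keep only the record whose name matches
# the raid bdev being verified.
info=$(echo "$json" | jq -r '.[] | select(.name == "Existed_Raid")')

# Field checks in the spirit of verify_raid_bdev_state's comparisons.
state=$(echo "$info" | jq -r '.state')
operational=$(echo "$info" | jq -r '.num_base_bdevs_operational')
echo "state=$state operational=$operational"
```

Selecting by name rather than indexing `.[0]` keeps the check correct even when `bdev_raid_get_bdevs all` returns multiple raid bdevs.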
00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.207 05:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.207 [2024-11-20 05:32:56.996077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:25.207 [2024-11-20 05:32:56.996129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.467 BaseBdev2 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.467 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
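The `waitforbdev BaseBdev2` call traced above follows a common poll-until-present pattern: issue `bdev_wait_for_examine`, then repeatedly probe with `bdev_get_bdevs -b <name> -t 2000` until the bdev shows up or the timeout expires. The following self-contained sketch models that retry loop; `probe_bdev` is a stand-in stub (the real test probes via `rpc_cmd bdev_get_bdevs`), and the names and timeout granularity are assumptions for illustration:

```shell
# Poll helper modeled on waitforbdev: retry a probe until it succeeds or the
# timeout (milliseconds, matching the `-t 2000` in the trace) is exhausted.
wait_for_bdev() {
    local name=$1 timeout_ms=${2:-2000} elapsed=0
    while ! probe_bdev "$name"; do
        sleep 0.1
        elapsed=$((elapsed + 100))
        if [ "$elapsed" -ge "$timeout_ms" ]; then
            echo "timed out waiting for $name" >&2
            return 1
        fi
    done
}

# Stub probe standing in for `rpc_cmd bdev_get_bdevs -b <name>`: here the
# bdev "exists" once a marker file does.
probe_bdev() { [ -e "/tmp/bdev_$1" ]; }

touch "/tmp/bdev_BaseBdev2"
wait_for_bdev BaseBdev2 2000 && echo "BaseBdev2 ready"
```

Bounding the wait keeps a missing base bdev from hanging the whole state-function test instead of failing it promptly.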
00:22:25.467 [ 00:22:25.467 { 00:22:25.467 "name": "BaseBdev2", 00:22:25.467 "aliases": [ 00:22:25.467 "d4002f60-35f3-428e-bacb-119ab540aaba" 00:22:25.467 ], 00:22:25.467 "product_name": "Malloc disk", 00:22:25.467 "block_size": 512, 00:22:25.467 "num_blocks": 65536, 00:22:25.467 "uuid": "d4002f60-35f3-428e-bacb-119ab540aaba", 00:22:25.467 "assigned_rate_limits": { 00:22:25.467 "rw_ios_per_sec": 0, 00:22:25.467 "rw_mbytes_per_sec": 0, 00:22:25.467 "r_mbytes_per_sec": 0, 00:22:25.467 "w_mbytes_per_sec": 0 00:22:25.467 }, 00:22:25.467 "claimed": false, 00:22:25.467 "zoned": false, 00:22:25.467 "supported_io_types": { 00:22:25.467 "read": true, 00:22:25.467 "write": true, 00:22:25.467 "unmap": true, 00:22:25.467 "flush": true, 00:22:25.467 "reset": true, 00:22:25.467 "nvme_admin": false, 00:22:25.468 "nvme_io": false, 00:22:25.468 "nvme_io_md": false, 00:22:25.468 "write_zeroes": true, 00:22:25.468 "zcopy": true, 00:22:25.468 "get_zone_info": false, 00:22:25.468 "zone_management": false, 00:22:25.468 "zone_append": false, 00:22:25.468 "compare": false, 00:22:25.468 "compare_and_write": false, 00:22:25.468 "abort": true, 00:22:25.468 "seek_hole": false, 00:22:25.468 "seek_data": false, 00:22:25.468 "copy": true, 00:22:25.468 "nvme_iov_md": false 00:22:25.468 }, 00:22:25.468 "memory_domains": [ 00:22:25.468 { 00:22:25.468 "dma_device_id": "system", 00:22:25.468 "dma_device_type": 1 00:22:25.468 }, 00:22:25.468 { 00:22:25.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.468 "dma_device_type": 2 00:22:25.468 } 00:22:25.468 ], 00:22:25.468 "driver_specific": {} 00:22:25.468 } 00:22:25.468 ] 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.468 BaseBdev3 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:25.468 [ 00:22:25.468 { 00:22:25.468 "name": "BaseBdev3", 00:22:25.468 "aliases": [ 00:22:25.468 "be14c5a4-855c-4551-9619-8e5adde3ef9d" 00:22:25.468 ], 00:22:25.468 "product_name": "Malloc disk", 00:22:25.468 "block_size": 512, 00:22:25.468 "num_blocks": 65536, 00:22:25.468 "uuid": "be14c5a4-855c-4551-9619-8e5adde3ef9d", 00:22:25.468 "assigned_rate_limits": { 00:22:25.468 "rw_ios_per_sec": 0, 00:22:25.468 "rw_mbytes_per_sec": 0, 00:22:25.468 "r_mbytes_per_sec": 0, 00:22:25.468 "w_mbytes_per_sec": 0 00:22:25.468 }, 00:22:25.468 "claimed": false, 00:22:25.468 "zoned": false, 00:22:25.468 "supported_io_types": { 00:22:25.468 "read": true, 00:22:25.468 "write": true, 00:22:25.468 "unmap": true, 00:22:25.468 "flush": true, 00:22:25.468 "reset": true, 00:22:25.468 "nvme_admin": false, 00:22:25.468 "nvme_io": false, 00:22:25.468 "nvme_io_md": false, 00:22:25.468 "write_zeroes": true, 00:22:25.468 "zcopy": true, 00:22:25.468 "get_zone_info": false, 00:22:25.468 "zone_management": false, 00:22:25.468 "zone_append": false, 00:22:25.468 "compare": false, 00:22:25.468 "compare_and_write": false, 00:22:25.468 "abort": true, 00:22:25.468 "seek_hole": false, 00:22:25.468 "seek_data": false, 00:22:25.468 "copy": true, 00:22:25.468 "nvme_iov_md": false 00:22:25.468 }, 00:22:25.468 "memory_domains": [ 00:22:25.468 { 00:22:25.468 "dma_device_id": "system", 00:22:25.468 "dma_device_type": 1 00:22:25.468 }, 00:22:25.468 { 00:22:25.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.468 "dma_device_type": 2 00:22:25.468 } 00:22:25.468 ], 00:22:25.468 "driver_specific": {} 00:22:25.468 } 00:22:25.468 ] 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:25.468 05:32:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.468 [2024-11-20 05:32:57.200515] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:25.468 [2024-11-20 05:32:57.200563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:25.468 [2024-11-20 05:32:57.200586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:25.468 [2024-11-20 05:32:57.202454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:25.468 05:32:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:25.468 "name": "Existed_Raid", 00:22:25.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.468 "strip_size_kb": 64, 00:22:25.468 "state": "configuring", 00:22:25.468 "raid_level": "raid5f", 00:22:25.468 "superblock": false, 00:22:25.468 "num_base_bdevs": 3, 00:22:25.468 "num_base_bdevs_discovered": 2, 00:22:25.468 "num_base_bdevs_operational": 3, 00:22:25.468 "base_bdevs_list": [ 00:22:25.468 { 00:22:25.468 "name": "BaseBdev1", 00:22:25.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.468 "is_configured": false, 00:22:25.468 "data_offset": 0, 00:22:25.468 "data_size": 0 00:22:25.468 }, 00:22:25.468 { 00:22:25.468 "name": "BaseBdev2", 00:22:25.468 "uuid": "d4002f60-35f3-428e-bacb-119ab540aaba", 00:22:25.468 "is_configured": true, 00:22:25.468 "data_offset": 0, 00:22:25.468 "data_size": 65536 00:22:25.468 }, 00:22:25.468 { 00:22:25.468 "name": "BaseBdev3", 00:22:25.468 "uuid": "be14c5a4-855c-4551-9619-8e5adde3ef9d", 00:22:25.468 "is_configured": true, 
00:22:25.468 "data_offset": 0, 00:22:25.468 "data_size": 65536 00:22:25.468 } 00:22:25.468 ] 00:22:25.468 }' 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:25.468 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.727 [2024-11-20 05:32:57.544584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.727 05:32:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.727 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.988 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.988 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:25.988 "name": "Existed_Raid", 00:22:25.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.988 "strip_size_kb": 64, 00:22:25.988 "state": "configuring", 00:22:25.988 "raid_level": "raid5f", 00:22:25.988 "superblock": false, 00:22:25.988 "num_base_bdevs": 3, 00:22:25.988 "num_base_bdevs_discovered": 1, 00:22:25.988 "num_base_bdevs_operational": 3, 00:22:25.988 "base_bdevs_list": [ 00:22:25.988 { 00:22:25.988 "name": "BaseBdev1", 00:22:25.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.988 "is_configured": false, 00:22:25.988 "data_offset": 0, 00:22:25.988 "data_size": 0 00:22:25.988 }, 00:22:25.988 { 00:22:25.988 "name": null, 00:22:25.988 "uuid": "d4002f60-35f3-428e-bacb-119ab540aaba", 00:22:25.988 "is_configured": false, 00:22:25.988 "data_offset": 0, 00:22:25.988 "data_size": 65536 00:22:25.988 }, 00:22:25.988 { 00:22:25.988 "name": "BaseBdev3", 00:22:25.988 "uuid": "be14c5a4-855c-4551-9619-8e5adde3ef9d", 00:22:25.988 "is_configured": true, 00:22:25.988 "data_offset": 0, 00:22:25.988 "data_size": 65536 00:22:25.988 } 00:22:25.988 ] 00:22:25.988 }' 00:22:25.988 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:25.988 05:32:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.246 [2024-11-20 05:32:57.959205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:26.246 BaseBdev1 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:26.246 05:32:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.246 [ 00:22:26.246 { 00:22:26.246 "name": "BaseBdev1", 00:22:26.246 "aliases": [ 00:22:26.246 "24c95f98-30f8-426a-ac82-90866fa32bd2" 00:22:26.246 ], 00:22:26.246 "product_name": "Malloc disk", 00:22:26.246 "block_size": 512, 00:22:26.246 "num_blocks": 65536, 00:22:26.246 "uuid": "24c95f98-30f8-426a-ac82-90866fa32bd2", 00:22:26.246 "assigned_rate_limits": { 00:22:26.246 "rw_ios_per_sec": 0, 00:22:26.246 "rw_mbytes_per_sec": 0, 00:22:26.246 "r_mbytes_per_sec": 0, 00:22:26.246 "w_mbytes_per_sec": 0 00:22:26.246 }, 00:22:26.246 "claimed": true, 00:22:26.246 "claim_type": "exclusive_write", 00:22:26.246 "zoned": false, 00:22:26.246 "supported_io_types": { 00:22:26.246 "read": true, 00:22:26.246 "write": true, 00:22:26.246 "unmap": true, 00:22:26.246 "flush": true, 00:22:26.246 "reset": true, 00:22:26.246 "nvme_admin": false, 00:22:26.246 "nvme_io": false, 00:22:26.246 "nvme_io_md": false, 00:22:26.246 "write_zeroes": true, 00:22:26.246 "zcopy": true, 00:22:26.246 "get_zone_info": false, 00:22:26.246 "zone_management": false, 00:22:26.246 "zone_append": false, 00:22:26.246 
"compare": false, 00:22:26.246 "compare_and_write": false, 00:22:26.246 "abort": true, 00:22:26.246 "seek_hole": false, 00:22:26.246 "seek_data": false, 00:22:26.246 "copy": true, 00:22:26.246 "nvme_iov_md": false 00:22:26.246 }, 00:22:26.246 "memory_domains": [ 00:22:26.246 { 00:22:26.246 "dma_device_id": "system", 00:22:26.246 "dma_device_type": 1 00:22:26.246 }, 00:22:26.246 { 00:22:26.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.246 "dma_device_type": 2 00:22:26.246 } 00:22:26.246 ], 00:22:26.246 "driver_specific": {} 00:22:26.246 } 00:22:26.246 ] 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.246 05:32:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.246 05:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.246 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.246 "name": "Existed_Raid", 00:22:26.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.246 "strip_size_kb": 64, 00:22:26.246 "state": "configuring", 00:22:26.246 "raid_level": "raid5f", 00:22:26.246 "superblock": false, 00:22:26.246 "num_base_bdevs": 3, 00:22:26.246 "num_base_bdevs_discovered": 2, 00:22:26.246 "num_base_bdevs_operational": 3, 00:22:26.246 "base_bdevs_list": [ 00:22:26.246 { 00:22:26.246 "name": "BaseBdev1", 00:22:26.246 "uuid": "24c95f98-30f8-426a-ac82-90866fa32bd2", 00:22:26.246 "is_configured": true, 00:22:26.246 "data_offset": 0, 00:22:26.246 "data_size": 65536 00:22:26.246 }, 00:22:26.246 { 00:22:26.246 "name": null, 00:22:26.246 "uuid": "d4002f60-35f3-428e-bacb-119ab540aaba", 00:22:26.246 "is_configured": false, 00:22:26.246 "data_offset": 0, 00:22:26.246 "data_size": 65536 00:22:26.246 }, 00:22:26.246 { 00:22:26.246 "name": "BaseBdev3", 00:22:26.246 "uuid": "be14c5a4-855c-4551-9619-8e5adde3ef9d", 00:22:26.246 "is_configured": true, 00:22:26.246 "data_offset": 0, 00:22:26.247 "data_size": 65536 00:22:26.247 } 00:22:26.247 ] 00:22:26.247 }' 00:22:26.247 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.247 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.527 05:32:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.527 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.527 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:26.527 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.785 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.786 [2024-11-20 05:32:58.383373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:26.786 05:32:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.786 "name": "Existed_Raid", 00:22:26.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.786 "strip_size_kb": 64, 00:22:26.786 "state": "configuring", 00:22:26.786 "raid_level": "raid5f", 00:22:26.786 "superblock": false, 00:22:26.786 "num_base_bdevs": 3, 00:22:26.786 "num_base_bdevs_discovered": 1, 00:22:26.786 "num_base_bdevs_operational": 3, 00:22:26.786 "base_bdevs_list": [ 00:22:26.786 { 00:22:26.786 "name": "BaseBdev1", 00:22:26.786 "uuid": "24c95f98-30f8-426a-ac82-90866fa32bd2", 00:22:26.786 "is_configured": true, 00:22:26.786 "data_offset": 0, 00:22:26.786 "data_size": 65536 00:22:26.786 }, 00:22:26.786 { 00:22:26.786 "name": null, 00:22:26.786 "uuid": "d4002f60-35f3-428e-bacb-119ab540aaba", 00:22:26.786 "is_configured": false, 00:22:26.786 "data_offset": 0, 00:22:26.786 "data_size": 65536 00:22:26.786 }, 00:22:26.786 { 00:22:26.786 "name": null, 
00:22:26.786 "uuid": "be14c5a4-855c-4551-9619-8e5adde3ef9d", 00:22:26.786 "is_configured": false, 00:22:26.786 "data_offset": 0, 00:22:26.786 "data_size": 65536 00:22:26.786 } 00:22:26.786 ] 00:22:26.786 }' 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.786 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.044 [2024-11-20 05:32:58.755494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:27.044 05:32:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.044 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.044 "name": "Existed_Raid", 00:22:27.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.044 "strip_size_kb": 64, 00:22:27.044 "state": "configuring", 00:22:27.044 "raid_level": "raid5f", 00:22:27.044 "superblock": false, 00:22:27.044 "num_base_bdevs": 3, 00:22:27.044 "num_base_bdevs_discovered": 2, 00:22:27.044 "num_base_bdevs_operational": 3, 00:22:27.044 "base_bdevs_list": [ 00:22:27.044 { 
00:22:27.044 "name": "BaseBdev1", 00:22:27.044 "uuid": "24c95f98-30f8-426a-ac82-90866fa32bd2", 00:22:27.044 "is_configured": true, 00:22:27.044 "data_offset": 0, 00:22:27.044 "data_size": 65536 00:22:27.044 }, 00:22:27.044 { 00:22:27.045 "name": null, 00:22:27.045 "uuid": "d4002f60-35f3-428e-bacb-119ab540aaba", 00:22:27.045 "is_configured": false, 00:22:27.045 "data_offset": 0, 00:22:27.045 "data_size": 65536 00:22:27.045 }, 00:22:27.045 { 00:22:27.045 "name": "BaseBdev3", 00:22:27.045 "uuid": "be14c5a4-855c-4551-9619-8e5adde3ef9d", 00:22:27.045 "is_configured": true, 00:22:27.045 "data_offset": 0, 00:22:27.045 "data_size": 65536 00:22:27.045 } 00:22:27.045 ] 00:22:27.045 }' 00:22:27.045 05:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.045 05:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.302 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.302 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:27.302 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.302 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.302 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.302 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:27.303 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:27.303 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.303 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.303 [2024-11-20 05:32:59.107577] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.561 "name": "Existed_Raid", 00:22:27.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.561 "strip_size_kb": 64, 00:22:27.561 "state": "configuring", 00:22:27.561 "raid_level": "raid5f", 00:22:27.561 "superblock": false, 00:22:27.561 "num_base_bdevs": 3, 00:22:27.561 "num_base_bdevs_discovered": 1, 00:22:27.561 "num_base_bdevs_operational": 3, 00:22:27.561 "base_bdevs_list": [ 00:22:27.561 { 00:22:27.561 "name": null, 00:22:27.561 "uuid": "24c95f98-30f8-426a-ac82-90866fa32bd2", 00:22:27.561 "is_configured": false, 00:22:27.561 "data_offset": 0, 00:22:27.561 "data_size": 65536 00:22:27.561 }, 00:22:27.561 { 00:22:27.561 "name": null, 00:22:27.561 "uuid": "d4002f60-35f3-428e-bacb-119ab540aaba", 00:22:27.561 "is_configured": false, 00:22:27.561 "data_offset": 0, 00:22:27.561 "data_size": 65536 00:22:27.561 }, 00:22:27.561 { 00:22:27.561 "name": "BaseBdev3", 00:22:27.561 "uuid": "be14c5a4-855c-4551-9619-8e5adde3ef9d", 00:22:27.561 "is_configured": true, 00:22:27.561 "data_offset": 0, 00:22:27.561 "data_size": 65536 00:22:27.561 } 00:22:27.561 ] 00:22:27.561 }' 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.561 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.819 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.819 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.819 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.819 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:27.819 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.819 05:32:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:27.819 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:27.819 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.819 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.819 [2024-11-20 05:32:59.547897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:27.819 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.819 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:27.819 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:27.820 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:27.820 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:27.820 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:27.820 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:27.820 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.820 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.820 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.820 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.820 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.820 05:32:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.820 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.820 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.820 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.820 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.820 "name": "Existed_Raid", 00:22:27.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.820 "strip_size_kb": 64, 00:22:27.820 "state": "configuring", 00:22:27.820 "raid_level": "raid5f", 00:22:27.820 "superblock": false, 00:22:27.820 "num_base_bdevs": 3, 00:22:27.820 "num_base_bdevs_discovered": 2, 00:22:27.820 "num_base_bdevs_operational": 3, 00:22:27.820 "base_bdevs_list": [ 00:22:27.820 { 00:22:27.820 "name": null, 00:22:27.820 "uuid": "24c95f98-30f8-426a-ac82-90866fa32bd2", 00:22:27.820 "is_configured": false, 00:22:27.820 "data_offset": 0, 00:22:27.820 "data_size": 65536 00:22:27.820 }, 00:22:27.820 { 00:22:27.820 "name": "BaseBdev2", 00:22:27.820 "uuid": "d4002f60-35f3-428e-bacb-119ab540aaba", 00:22:27.820 "is_configured": true, 00:22:27.820 "data_offset": 0, 00:22:27.820 "data_size": 65536 00:22:27.820 }, 00:22:27.820 { 00:22:27.820 "name": "BaseBdev3", 00:22:27.820 "uuid": "be14c5a4-855c-4551-9619-8e5adde3ef9d", 00:22:27.820 "is_configured": true, 00:22:27.820 "data_offset": 0, 00:22:27.820 "data_size": 65536 00:22:27.820 } 00:22:27.820 ] 00:22:27.820 }' 00:22:27.820 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.820 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.079 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.079 05:32:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.079 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.079 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:28.079 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 24c95f98-30f8-426a-ac82-90866fa32bd2 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.338 [2024-11-20 05:32:59.974517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:28.338 [2024-11-20 05:32:59.974565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:28.338 [2024-11-20 05:32:59.974574] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:28.338 [2024-11-20 05:32:59.974823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:22:28.338 [2024-11-20 05:32:59.978354] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:28.338 [2024-11-20 05:32:59.978386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:28.338 [2024-11-20 05:32:59.978632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.338 NewBaseBdev 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:28.338 05:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.338 05:32:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.338 [ 00:22:28.338 { 00:22:28.338 "name": "NewBaseBdev", 00:22:28.338 "aliases": [ 00:22:28.338 "24c95f98-30f8-426a-ac82-90866fa32bd2" 00:22:28.338 ], 00:22:28.338 "product_name": "Malloc disk", 00:22:28.338 "block_size": 512, 00:22:28.339 "num_blocks": 65536, 00:22:28.339 "uuid": "24c95f98-30f8-426a-ac82-90866fa32bd2", 00:22:28.339 "assigned_rate_limits": { 00:22:28.339 "rw_ios_per_sec": 0, 00:22:28.339 "rw_mbytes_per_sec": 0, 00:22:28.339 "r_mbytes_per_sec": 0, 00:22:28.339 "w_mbytes_per_sec": 0 00:22:28.339 }, 00:22:28.339 "claimed": true, 00:22:28.339 "claim_type": "exclusive_write", 00:22:28.339 "zoned": false, 00:22:28.339 "supported_io_types": { 00:22:28.339 "read": true, 00:22:28.339 "write": true, 00:22:28.339 "unmap": true, 00:22:28.339 "flush": true, 00:22:28.339 "reset": true, 00:22:28.339 "nvme_admin": false, 00:22:28.339 "nvme_io": false, 00:22:28.339 "nvme_io_md": false, 00:22:28.339 "write_zeroes": true, 00:22:28.339 "zcopy": true, 00:22:28.339 "get_zone_info": false, 00:22:28.339 "zone_management": false, 00:22:28.339 "zone_append": false, 00:22:28.339 "compare": false, 00:22:28.339 "compare_and_write": false, 00:22:28.339 "abort": true, 00:22:28.339 "seek_hole": false, 00:22:28.339 "seek_data": false, 00:22:28.339 "copy": true, 00:22:28.339 "nvme_iov_md": false 00:22:28.339 }, 00:22:28.339 "memory_domains": [ 00:22:28.339 { 00:22:28.339 "dma_device_id": "system", 00:22:28.339 "dma_device_type": 1 00:22:28.339 }, 00:22:28.339 { 00:22:28.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.339 "dma_device_type": 2 00:22:28.339 } 00:22:28.339 ], 00:22:28.339 "driver_specific": {} 00:22:28.339 } 00:22:28.339 ] 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:28.339 05:33:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:28.339 "name": "Existed_Raid", 00:22:28.339 "uuid": "46ba843d-a420-46e0-867f-e415dbb25b51", 00:22:28.339 "strip_size_kb": 64, 00:22:28.339 "state": "online", 
00:22:28.339 "raid_level": "raid5f", 00:22:28.339 "superblock": false, 00:22:28.339 "num_base_bdevs": 3, 00:22:28.339 "num_base_bdevs_discovered": 3, 00:22:28.339 "num_base_bdevs_operational": 3, 00:22:28.339 "base_bdevs_list": [ 00:22:28.339 { 00:22:28.339 "name": "NewBaseBdev", 00:22:28.339 "uuid": "24c95f98-30f8-426a-ac82-90866fa32bd2", 00:22:28.339 "is_configured": true, 00:22:28.339 "data_offset": 0, 00:22:28.339 "data_size": 65536 00:22:28.339 }, 00:22:28.339 { 00:22:28.339 "name": "BaseBdev2", 00:22:28.339 "uuid": "d4002f60-35f3-428e-bacb-119ab540aaba", 00:22:28.339 "is_configured": true, 00:22:28.339 "data_offset": 0, 00:22:28.339 "data_size": 65536 00:22:28.339 }, 00:22:28.339 { 00:22:28.339 "name": "BaseBdev3", 00:22:28.339 "uuid": "be14c5a4-855c-4551-9619-8e5adde3ef9d", 00:22:28.339 "is_configured": true, 00:22:28.339 "data_offset": 0, 00:22:28.339 "data_size": 65536 00:22:28.339 } 00:22:28.339 ] 00:22:28.339 }' 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:28.339 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.597 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:28.597 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:28.597 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:28.597 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:28.597 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:28.598 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:28.598 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:28.598 05:33:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:28.598 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.598 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.598 [2024-11-20 05:33:00.338828] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:28.598 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.598 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:28.598 "name": "Existed_Raid", 00:22:28.598 "aliases": [ 00:22:28.598 "46ba843d-a420-46e0-867f-e415dbb25b51" 00:22:28.598 ], 00:22:28.598 "product_name": "Raid Volume", 00:22:28.598 "block_size": 512, 00:22:28.598 "num_blocks": 131072, 00:22:28.598 "uuid": "46ba843d-a420-46e0-867f-e415dbb25b51", 00:22:28.598 "assigned_rate_limits": { 00:22:28.598 "rw_ios_per_sec": 0, 00:22:28.598 "rw_mbytes_per_sec": 0, 00:22:28.598 "r_mbytes_per_sec": 0, 00:22:28.598 "w_mbytes_per_sec": 0 00:22:28.598 }, 00:22:28.598 "claimed": false, 00:22:28.598 "zoned": false, 00:22:28.598 "supported_io_types": { 00:22:28.598 "read": true, 00:22:28.598 "write": true, 00:22:28.598 "unmap": false, 00:22:28.598 "flush": false, 00:22:28.598 "reset": true, 00:22:28.598 "nvme_admin": false, 00:22:28.598 "nvme_io": false, 00:22:28.598 "nvme_io_md": false, 00:22:28.598 "write_zeroes": true, 00:22:28.598 "zcopy": false, 00:22:28.598 "get_zone_info": false, 00:22:28.598 "zone_management": false, 00:22:28.598 "zone_append": false, 00:22:28.598 "compare": false, 00:22:28.598 "compare_and_write": false, 00:22:28.598 "abort": false, 00:22:28.598 "seek_hole": false, 00:22:28.598 "seek_data": false, 00:22:28.598 "copy": false, 00:22:28.598 "nvme_iov_md": false 00:22:28.598 }, 00:22:28.598 "driver_specific": { 00:22:28.598 "raid": { 00:22:28.598 "uuid": 
"46ba843d-a420-46e0-867f-e415dbb25b51", 00:22:28.598 "strip_size_kb": 64, 00:22:28.598 "state": "online", 00:22:28.598 "raid_level": "raid5f", 00:22:28.598 "superblock": false, 00:22:28.598 "num_base_bdevs": 3, 00:22:28.598 "num_base_bdevs_discovered": 3, 00:22:28.598 "num_base_bdevs_operational": 3, 00:22:28.598 "base_bdevs_list": [ 00:22:28.598 { 00:22:28.598 "name": "NewBaseBdev", 00:22:28.598 "uuid": "24c95f98-30f8-426a-ac82-90866fa32bd2", 00:22:28.598 "is_configured": true, 00:22:28.598 "data_offset": 0, 00:22:28.598 "data_size": 65536 00:22:28.598 }, 00:22:28.598 { 00:22:28.598 "name": "BaseBdev2", 00:22:28.598 "uuid": "d4002f60-35f3-428e-bacb-119ab540aaba", 00:22:28.598 "is_configured": true, 00:22:28.598 "data_offset": 0, 00:22:28.598 "data_size": 65536 00:22:28.598 }, 00:22:28.598 { 00:22:28.598 "name": "BaseBdev3", 00:22:28.598 "uuid": "be14c5a4-855c-4551-9619-8e5adde3ef9d", 00:22:28.598 "is_configured": true, 00:22:28.598 "data_offset": 0, 00:22:28.598 "data_size": 65536 00:22:28.598 } 00:22:28.598 ] 00:22:28.598 } 00:22:28.598 } 00:22:28.598 }' 00:22:28.598 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:28.598 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:28.598 BaseBdev2 00:22:28.598 BaseBdev3' 00:22:28.598 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.856 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:28.857 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:28.857 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:28.857 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.857 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.857 [2024-11-20 05:33:00.534708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:28.857 [2024-11-20 05:33:00.534739] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:28.857 [2024-11-20 05:33:00.534799] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:28.857 [2024-11-20 05:33:00.535023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:28.857 [2024-11-20 05:33:00.535046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:28.857 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.857 05:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 77765 00:22:28.857 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 77765 ']' 00:22:28.857 05:33:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 77765
00:22:28.857 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname
00:22:28.857 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:28.857 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77765
00:22:28.857 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
killing process with pid 77765
00:22:28.857 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:22:28.857 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77765'
00:22:28.857 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 77765
00:22:28.857 [2024-11-20 05:33:00.563112] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:22:28.857 05:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 77765
00:22:29.116 [2024-11-20 05:33:00.714012] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:22:29.683
00:22:29.683 real 0m7.983s
00:22:29.683 user 0m12.866s
00:22:29.683 sys 0m1.392s
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:29.683 ************************************
00:22:29.683 END TEST raid5f_state_function_test
00:22:29.683 ************************************
00:22:29.683 05:33:01 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true
00:22:29.683 05:33:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:22:29.683 05:33:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:22:29.683 05:33:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:22:29.683 ************************************
00:22:29.683 START TEST raid5f_state_function_test_sb
00:22:29.683 ************************************
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']'
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78359
Process raid pid: 78359
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78359'
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78359
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78359 ']'
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable
00:22:29.683 05:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:29.683 [2024-11-20 05:33:01.406824] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization...
00:22:29.684 [2024-11-20 05:33:01.406984] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:29.941 [2024-11-20 05:33:01.568969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:29.941 [2024-11-20 05:33:01.671121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:30.199 [2024-11-20 05:33:01.809478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:30.199 [2024-11-20 05:33:01.809516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:30.457 [2024-11-20 05:33:02.211674] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:22:30.457 [2024-11-20 05:33:02.211730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:22:30.457 [2024-11-20 05:33:02.211740] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:22:30.457 [2024-11-20 05:33:02.211749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:22:30.457 [2024-11-20 05:33:02.211756] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:22:30.457 [2024-11-20 05:33:02.211765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:30.457 "name": "Existed_Raid",
00:22:30.457 "uuid": "d77a23aa-584d-4522-8bfd-6c0e16287716",
00:22:30.457 "strip_size_kb": 64,
00:22:30.457 "state": "configuring",
00:22:30.457 "raid_level": "raid5f",
00:22:30.457 "superblock": true,
00:22:30.457 "num_base_bdevs": 3,
00:22:30.457 "num_base_bdevs_discovered": 0,
00:22:30.457 "num_base_bdevs_operational": 3,
00:22:30.457 "base_bdevs_list": [
00:22:30.457 {
00:22:30.457 "name": "BaseBdev1",
00:22:30.457 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:30.457 "is_configured": false,
00:22:30.457 "data_offset": 0,
00:22:30.457 "data_size": 0
00:22:30.457 },
00:22:30.457 {
00:22:30.457 "name": "BaseBdev2",
00:22:30.457 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:30.457 "is_configured": false,
00:22:30.457 "data_offset": 0,
00:22:30.457 "data_size": 0
00:22:30.457 },
00:22:30.457 {
00:22:30.457 "name": "BaseBdev3",
00:22:30.457 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:30.457 "is_configured": false,
00:22:30.457 "data_offset": 0,
00:22:30.457 "data_size": 0
00:22:30.457 }
00:22:30.457 ]
00:22:30.457 }'
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:30.457 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:30.715 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:22:30.715 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:30.715 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:30.715 [2024-11-20 05:33:02.527756] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:22:30.715 [2024-11-20 05:33:02.527811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:22:30.715 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:30.715 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:22:30.715 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:30.715 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:30.715 [2024-11-20 05:33:02.535714] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:22:30.715 [2024-11-20 05:33:02.535764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:22:30.715 [2024-11-20 05:33:02.535773] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:22:30.715 [2024-11-20 05:33:02.535787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:22:30.715 [2024-11-20 05:33:02.535797] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:22:30.715 [2024-11-20 05:33:02.535810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:22:30.715 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:30.715 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:22:30.715 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:30.715 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:30.973 [2024-11-20 05:33:02.568299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:30.973 BaseBdev1
00:22:30.973 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:30.973 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:22:30.973 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:22:30.973 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:22:30.973 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:22:30.973 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:22:30.973 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:22:30.973 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:22:30.973 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:30.973 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:30.973 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:30.973 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:22:30.973 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:30.973 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:30.973 [
00:22:30.973 {
00:22:30.973 "name": "BaseBdev1",
00:22:30.973 "aliases": [
00:22:30.973 "93c18b3a-e95e-4891-a009-45bc15394704"
00:22:30.973 ],
00:22:30.973 "product_name": "Malloc disk",
00:22:30.973 "block_size": 512,
00:22:30.973 "num_blocks": 65536,
00:22:30.973 "uuid": "93c18b3a-e95e-4891-a009-45bc15394704",
00:22:30.973 "assigned_rate_limits": {
00:22:30.973 "rw_ios_per_sec": 0,
00:22:30.973 "rw_mbytes_per_sec": 0,
00:22:30.973 "r_mbytes_per_sec": 0,
00:22:30.973 "w_mbytes_per_sec": 0
00:22:30.973 },
00:22:30.973 "claimed": true,
00:22:30.973 "claim_type": "exclusive_write",
00:22:30.973 "zoned": false,
00:22:30.973 "supported_io_types": {
00:22:30.973 "read": true,
00:22:30.973 "write": true,
00:22:30.973 "unmap": true,
00:22:30.973 "flush": true,
00:22:30.973 "reset": true,
00:22:30.973 "nvme_admin": false,
00:22:30.973 "nvme_io": false,
00:22:30.973 "nvme_io_md": false,
00:22:30.973 "write_zeroes": true,
00:22:30.973 "zcopy": true,
00:22:30.973 "get_zone_info": false,
00:22:30.973 "zone_management": false,
00:22:30.973 "zone_append": false,
00:22:30.973 "compare": false,
00:22:30.973 "compare_and_write": false,
00:22:30.973 "abort": true,
00:22:30.973 "seek_hole": false,
00:22:30.973 "seek_data": false,
00:22:30.974 "copy": true,
00:22:30.974 "nvme_iov_md": false
00:22:30.974 },
00:22:30.974 "memory_domains": [
00:22:30.974 {
00:22:30.974 "dma_device_id": "system",
00:22:30.974 "dma_device_type": 1
00:22:30.974 },
00:22:30.974 {
00:22:30.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:30.974 "dma_device_type": 2
00:22:30.974 }
00:22:30.974 ],
00:22:30.974 "driver_specific": {}
00:22:30.974 }
00:22:30.974 ]
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:30.974 "name": "Existed_Raid",
00:22:30.974 "uuid": "0ec8f2a3-7a39-49af-a3b0-65224af9b284",
00:22:30.974 "strip_size_kb": 64,
00:22:30.974 "state": "configuring",
00:22:30.974 "raid_level": "raid5f",
00:22:30.974 "superblock": true,
00:22:30.974 "num_base_bdevs": 3,
00:22:30.974 "num_base_bdevs_discovered": 1,
00:22:30.974 "num_base_bdevs_operational": 3,
00:22:30.974 "base_bdevs_list": [
00:22:30.974 {
00:22:30.974 "name": "BaseBdev1",
00:22:30.974 "uuid": "93c18b3a-e95e-4891-a009-45bc15394704",
00:22:30.974 "is_configured": true,
00:22:30.974 "data_offset": 2048,
00:22:30.974 "data_size": 63488
00:22:30.974 },
00:22:30.974 {
00:22:30.974 "name": "BaseBdev2",
00:22:30.974 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:30.974 "is_configured": false,
00:22:30.974 "data_offset": 0,
00:22:30.974 "data_size": 0
00:22:30.974 },
00:22:30.974 {
00:22:30.974 "name": "BaseBdev3",
00:22:30.974 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:30.974 "is_configured": false,
00:22:30.974 "data_offset": 0,
00:22:30.974 "data_size": 0
00:22:30.974 }
00:22:30.974 ]
00:22:30.974 }'
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:30.974 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:31.230 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:22:31.230 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:31.230 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:31.230 [2024-11-20 05:33:02.896439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:22:31.230 [2024-11-20 05:33:02.896484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:22:31.230 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:31.230 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:22:31.230 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:31.230 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:31.230 [2024-11-20 05:33:02.904495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:31.230 [2024-11-20 05:33:02.906329] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:22:31.230 [2024-11-20 05:33:02.906382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:22:31.230 [2024-11-20 05:33:02.906391] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:22:31.230 [2024-11-20 05:33:02.906401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:22:31.230 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:31.230 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:22:31.230 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:22:31.230 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:31.230 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:31.230 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:31.230 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:31.230 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:31.231 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:31.231 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:31.231 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:31.231 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:31.231 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:31.231 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:31.231 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:31.231 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:31.231 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:31.231 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:31.231 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:31.231 "name": "Existed_Raid",
00:22:31.231 "uuid": "d1621d01-1e37-4d33-8010-fbfe9b21746b",
00:22:31.231 "strip_size_kb": 64,
00:22:31.231 "state": "configuring",
00:22:31.231 "raid_level": "raid5f",
00:22:31.231 "superblock": true,
00:22:31.231 "num_base_bdevs": 3,
00:22:31.231 "num_base_bdevs_discovered": 1,
00:22:31.231 "num_base_bdevs_operational": 3,
00:22:31.231 "base_bdevs_list": [
00:22:31.231 {
00:22:31.231 "name": "BaseBdev1",
00:22:31.231 "uuid": "93c18b3a-e95e-4891-a009-45bc15394704",
00:22:31.231 "is_configured": true,
00:22:31.231 "data_offset": 2048,
00:22:31.231 "data_size": 63488
00:22:31.231 },
00:22:31.231 {
00:22:31.231 "name": "BaseBdev2",
00:22:31.231 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:31.231 "is_configured": false,
00:22:31.231 "data_offset": 0,
00:22:31.231 "data_size": 0
00:22:31.231 },
00:22:31.231 {
00:22:31.231 "name": "BaseBdev3",
00:22:31.231 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:31.231 "is_configured": false,
00:22:31.231 "data_offset": 0,
00:22:31.231 "data_size": 0
00:22:31.231 }
00:22:31.231 ]
00:22:31.231 }'
00:22:31.231 05:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:31.231 05:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:31.489 [2024-11-20 05:33:03.231157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:22:31.489 BaseBdev2
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:31.489 [
00:22:31.489 {
00:22:31.489 "name": "BaseBdev2",
00:22:31.489 "aliases": [
00:22:31.489 "7e460245-5dd7-4c02-abdf-c860467afbe7"
00:22:31.489 ],
00:22:31.489 "product_name": "Malloc disk",
00:22:31.489 "block_size": 512,
00:22:31.489 "num_blocks": 65536,
00:22:31.489 "uuid": "7e460245-5dd7-4c02-abdf-c860467afbe7",
00:22:31.489 "assigned_rate_limits": {
00:22:31.489 "rw_ios_per_sec": 0,
00:22:31.489 "rw_mbytes_per_sec": 0,
00:22:31.489 "r_mbytes_per_sec": 0,
00:22:31.489 "w_mbytes_per_sec": 0
00:22:31.489 },
00:22:31.489 "claimed": true,
00:22:31.489 "claim_type": "exclusive_write",
00:22:31.489 "zoned": false,
00:22:31.489 "supported_io_types": {
00:22:31.489 "read": true,
00:22:31.489 "write": true,
00:22:31.489 "unmap": true,
00:22:31.489 "flush": true,
00:22:31.489 "reset": true,
00:22:31.489 "nvme_admin": false,
00:22:31.489 "nvme_io": false,
00:22:31.489 "nvme_io_md": false,
00:22:31.489 "write_zeroes": true,
00:22:31.489 "zcopy": true,
00:22:31.489 "get_zone_info": false,
00:22:31.489 "zone_management": false,
00:22:31.489 "zone_append": false,
00:22:31.489 "compare": false,
00:22:31.489 "compare_and_write": false,
00:22:31.489 "abort": true,
00:22:31.489 "seek_hole": false,
00:22:31.489 "seek_data": false,
00:22:31.489 "copy": true,
00:22:31.489 "nvme_iov_md": false
00:22:31.489 },
00:22:31.489 "memory_domains": [
00:22:31.489 {
00:22:31.489 "dma_device_id": "system",
00:22:31.489 "dma_device_type": 1
00:22:31.489 },
00:22:31.489 {
00:22:31.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:31.489 "dma_device_type": 2
00:22:31.489 }
00:22:31.489 ],
00:22:31.489 "driver_specific": {}
00:22:31.489 }
00:22:31.489 ]
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:31.489 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:31.490 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:31.490 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:31.490 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:31.490 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:31.490 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:31.490 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:31.490 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:31.490 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:31.490 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:31.490 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:31.490 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:31.490 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:31.490 "name": "Existed_Raid",
00:22:31.490 "uuid": "d1621d01-1e37-4d33-8010-fbfe9b21746b",
00:22:31.490 "strip_size_kb": 64,
00:22:31.490 "state": "configuring",
00:22:31.490 "raid_level": "raid5f",
00:22:31.490 "superblock": true,
00:22:31.490 "num_base_bdevs": 3,
00:22:31.490 "num_base_bdevs_discovered": 2,
00:22:31.490 "num_base_bdevs_operational": 3,
00:22:31.490 "base_bdevs_list": [
00:22:31.490 {
00:22:31.490 "name": "BaseBdev1",
00:22:31.490 "uuid": "93c18b3a-e95e-4891-a009-45bc15394704",
00:22:31.490 "is_configured": true,
00:22:31.490 "data_offset": 2048,
00:22:31.490 "data_size": 63488
00:22:31.490 },
00:22:31.490 {
00:22:31.490 "name": "BaseBdev2",
00:22:31.490 "uuid": "7e460245-5dd7-4c02-abdf-c860467afbe7",
00:22:31.490 "is_configured": true,
00:22:31.490 "data_offset": 2048,
00:22:31.490 "data_size": 63488
00:22:31.490 },
00:22:31.490 {
00:22:31.490 "name": "BaseBdev3",
00:22:31.490 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:31.490 "is_configured": false,
00:22:31.490 "data_offset": 0,
00:22:31.490 "data_size": 0
00:22:31.490 }
00:22:31.490 ]
00:22:31.490 }'
00:22:31.490 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:31.490 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:31.821 [2024-11-20 05:33:03.590905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:22:31.821 [2024-11-20 05:33:03.591341] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:22:31.821 [2024-11-20 05:33:03.591384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
BaseBdev3
00:22:31.821 [2024-11-20 05:33:03.591648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:31.821 [2024-11-20 05:33:03.595509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:22:31.821 [2024-11-20 05:33:03.595529] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:22:31.821 [2024-11-20 05:33:03.595782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:31.821 [
00:22:31.821 {
00:22:31.821 "name": "BaseBdev3",
00:22:31.821 "aliases": [
00:22:31.821 "bd12b7f7-9c08-4dba-bb7f-0a2d38d2f327"
00:22:31.821 ],
00:22:31.821 "product_name": "Malloc disk",
00:22:31.821 "block_size": 512,
00:22:31.821 "num_blocks": 65536,
00:22:31.821 "uuid": "bd12b7f7-9c08-4dba-bb7f-0a2d38d2f327",
00:22:31.821 "assigned_rate_limits": {
00:22:31.821 "rw_ios_per_sec": 0,
00:22:31.821 "rw_mbytes_per_sec": 0,
00:22:31.821 "r_mbytes_per_sec": 0,
00:22:31.821 "w_mbytes_per_sec": 0
00:22:31.821 },
00:22:31.821 "claimed": true,
00:22:31.821 "claim_type": "exclusive_write",
00:22:31.821 "zoned": false,
00:22:31.821 "supported_io_types": {
00:22:31.821 "read": true,
00:22:31.821 "write": true,
00:22:31.821 "unmap": true,
00:22:31.821 "flush": true,
00:22:31.821 "reset": true,
00:22:31.821 "nvme_admin": false,
00:22:31.821 "nvme_io": false,
00:22:31.821 "nvme_io_md": false,
00:22:31.821 "write_zeroes": true,
00:22:31.821 "zcopy": true,
00:22:31.821 "get_zone_info": false,
00:22:31.821 "zone_management": false,
00:22:31.821 "zone_append": false,
00:22:31.821 "compare": false,
00:22:31.821 "compare_and_write": false,
00:22:31.821 "abort": true,
00:22:31.821 "seek_hole": false,
00:22:31.821 "seek_data": false,
00:22:31.821 "copy": true,
00:22:31.821 "nvme_iov_md": false
00:22:31.821 },
00:22:31.821 "memory_domains": [
00:22:31.821 {
00:22:31.821 "dma_device_id": "system",
00:22:31.821 "dma_device_type": 1
00:22:31.821 },
00:22:31.821 {
00:22:31.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:31.821 "dma_device_type": 2
00:22:31.821 }
00:22:31.821 ],
00:22:31.821 "driver_specific": {}
00:22:31.821 }
00:22:31.821 ]
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb --
bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.821 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.081 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:32.081 "name": "Existed_Raid", 00:22:32.081 "uuid": "d1621d01-1e37-4d33-8010-fbfe9b21746b", 00:22:32.081 "strip_size_kb": 64, 00:22:32.081 "state": "online", 00:22:32.081 "raid_level": "raid5f", 00:22:32.081 "superblock": true, 00:22:32.081 "num_base_bdevs": 3, 00:22:32.081 "num_base_bdevs_discovered": 3, 00:22:32.081 "num_base_bdevs_operational": 3, 00:22:32.081 "base_bdevs_list": [ 00:22:32.081 { 00:22:32.081 "name": "BaseBdev1", 00:22:32.081 "uuid": "93c18b3a-e95e-4891-a009-45bc15394704", 00:22:32.081 "is_configured": true, 00:22:32.081 "data_offset": 2048, 00:22:32.081 "data_size": 63488 00:22:32.081 }, 00:22:32.081 { 00:22:32.081 "name": "BaseBdev2", 00:22:32.081 "uuid": "7e460245-5dd7-4c02-abdf-c860467afbe7", 00:22:32.081 "is_configured": true, 00:22:32.081 "data_offset": 2048, 00:22:32.081 "data_size": 63488 00:22:32.081 }, 00:22:32.081 { 00:22:32.081 "name": "BaseBdev3", 00:22:32.081 "uuid": "bd12b7f7-9c08-4dba-bb7f-0a2d38d2f327", 00:22:32.081 "is_configured": true, 00:22:32.081 "data_offset": 2048, 00:22:32.081 "data_size": 63488 00:22:32.081 } 00:22:32.081 ] 00:22:32.081 }' 00:22:32.081 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:32.081 05:33:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.340 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:32.340 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:32.340 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:32.340 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:32.340 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:32.340 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:32.340 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:32.340 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.340 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.340 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:32.340 [2024-11-20 05:33:03.948123] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:32.340 05:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.340 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:32.340 "name": "Existed_Raid", 00:22:32.340 "aliases": [ 00:22:32.340 "d1621d01-1e37-4d33-8010-fbfe9b21746b" 00:22:32.340 ], 00:22:32.340 "product_name": "Raid Volume", 00:22:32.340 "block_size": 512, 00:22:32.340 "num_blocks": 126976, 00:22:32.340 "uuid": "d1621d01-1e37-4d33-8010-fbfe9b21746b", 00:22:32.340 "assigned_rate_limits": { 00:22:32.340 "rw_ios_per_sec": 0, 00:22:32.340 
"rw_mbytes_per_sec": 0, 00:22:32.340 "r_mbytes_per_sec": 0, 00:22:32.340 "w_mbytes_per_sec": 0 00:22:32.340 }, 00:22:32.340 "claimed": false, 00:22:32.340 "zoned": false, 00:22:32.340 "supported_io_types": { 00:22:32.340 "read": true, 00:22:32.340 "write": true, 00:22:32.340 "unmap": false, 00:22:32.340 "flush": false, 00:22:32.340 "reset": true, 00:22:32.340 "nvme_admin": false, 00:22:32.340 "nvme_io": false, 00:22:32.340 "nvme_io_md": false, 00:22:32.340 "write_zeroes": true, 00:22:32.340 "zcopy": false, 00:22:32.340 "get_zone_info": false, 00:22:32.340 "zone_management": false, 00:22:32.340 "zone_append": false, 00:22:32.340 "compare": false, 00:22:32.340 "compare_and_write": false, 00:22:32.340 "abort": false, 00:22:32.340 "seek_hole": false, 00:22:32.340 "seek_data": false, 00:22:32.340 "copy": false, 00:22:32.340 "nvme_iov_md": false 00:22:32.340 }, 00:22:32.340 "driver_specific": { 00:22:32.340 "raid": { 00:22:32.340 "uuid": "d1621d01-1e37-4d33-8010-fbfe9b21746b", 00:22:32.340 "strip_size_kb": 64, 00:22:32.340 "state": "online", 00:22:32.340 "raid_level": "raid5f", 00:22:32.340 "superblock": true, 00:22:32.340 "num_base_bdevs": 3, 00:22:32.340 "num_base_bdevs_discovered": 3, 00:22:32.340 "num_base_bdevs_operational": 3, 00:22:32.340 "base_bdevs_list": [ 00:22:32.340 { 00:22:32.340 "name": "BaseBdev1", 00:22:32.340 "uuid": "93c18b3a-e95e-4891-a009-45bc15394704", 00:22:32.340 "is_configured": true, 00:22:32.340 "data_offset": 2048, 00:22:32.340 "data_size": 63488 00:22:32.340 }, 00:22:32.340 { 00:22:32.340 "name": "BaseBdev2", 00:22:32.340 "uuid": "7e460245-5dd7-4c02-abdf-c860467afbe7", 00:22:32.340 "is_configured": true, 00:22:32.340 "data_offset": 2048, 00:22:32.340 "data_size": 63488 00:22:32.340 }, 00:22:32.340 { 00:22:32.340 "name": "BaseBdev3", 00:22:32.340 "uuid": "bd12b7f7-9c08-4dba-bb7f-0a2d38d2f327", 00:22:32.340 "is_configured": true, 00:22:32.340 "data_offset": 2048, 00:22:32.340 "data_size": 63488 00:22:32.340 } 00:22:32.340 ] 00:22:32.340 } 
00:22:32.340 } 00:22:32.340 }' 00:22:32.340 05:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:32.340 BaseBdev2 00:22:32.340 BaseBdev3' 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.340 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.340 [2024-11-20 
05:33:04.151998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:32.601 05:33:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:32.601 "name": "Existed_Raid", 00:22:32.601 "uuid": "d1621d01-1e37-4d33-8010-fbfe9b21746b", 00:22:32.601 "strip_size_kb": 64, 00:22:32.601 "state": "online", 00:22:32.601 "raid_level": "raid5f", 00:22:32.601 "superblock": true, 00:22:32.601 "num_base_bdevs": 3, 00:22:32.601 "num_base_bdevs_discovered": 2, 00:22:32.601 "num_base_bdevs_operational": 2, 00:22:32.601 "base_bdevs_list": [ 00:22:32.601 { 00:22:32.601 "name": null, 00:22:32.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.601 "is_configured": false, 00:22:32.601 "data_offset": 0, 00:22:32.601 "data_size": 63488 00:22:32.601 }, 00:22:32.601 { 00:22:32.601 "name": "BaseBdev2", 00:22:32.601 "uuid": "7e460245-5dd7-4c02-abdf-c860467afbe7", 00:22:32.601 "is_configured": true, 00:22:32.601 "data_offset": 2048, 00:22:32.601 "data_size": 63488 00:22:32.601 }, 00:22:32.601 { 00:22:32.601 "name": "BaseBdev3", 00:22:32.601 "uuid": "bd12b7f7-9c08-4dba-bb7f-0a2d38d2f327", 00:22:32.601 "is_configured": true, 00:22:32.601 "data_offset": 2048, 00:22:32.601 "data_size": 63488 00:22:32.601 } 00:22:32.601 ] 00:22:32.601 }' 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:32.601 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.860 [2024-11-20 05:33:04.561559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:32.860 [2024-11-20 05:33:04.561691] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:32.860 [2024-11-20 05:33:04.623426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:32.860 05:33:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.860 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.860 [2024-11-20 05:33:04.663471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:32.860 [2024-11-20 05:33:04.663517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.119 
05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.119 BaseBdev2 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:33.119 05:33:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:33.119 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.120 [ 00:22:33.120 { 00:22:33.120 "name": "BaseBdev2", 00:22:33.120 "aliases": [ 00:22:33.120 "158c2b4e-be94-485c-9f98-f967da0b065c" 00:22:33.120 ], 00:22:33.120 "product_name": "Malloc disk", 00:22:33.120 "block_size": 512, 00:22:33.120 "num_blocks": 65536, 00:22:33.120 "uuid": "158c2b4e-be94-485c-9f98-f967da0b065c", 00:22:33.120 "assigned_rate_limits": { 00:22:33.120 "rw_ios_per_sec": 0, 00:22:33.120 "rw_mbytes_per_sec": 0, 00:22:33.120 "r_mbytes_per_sec": 0, 00:22:33.120 "w_mbytes_per_sec": 0 00:22:33.120 }, 00:22:33.120 "claimed": false, 00:22:33.120 "zoned": false, 00:22:33.120 "supported_io_types": { 00:22:33.120 "read": true, 00:22:33.120 "write": true, 00:22:33.120 "unmap": true, 00:22:33.120 "flush": true, 00:22:33.120 "reset": true, 00:22:33.120 "nvme_admin": false, 00:22:33.120 "nvme_io": false, 00:22:33.120 "nvme_io_md": false, 00:22:33.120 "write_zeroes": true, 00:22:33.120 "zcopy": true, 00:22:33.120 "get_zone_info": false, 
00:22:33.120 "zone_management": false, 00:22:33.120 "zone_append": false, 00:22:33.120 "compare": false, 00:22:33.120 "compare_and_write": false, 00:22:33.120 "abort": true, 00:22:33.120 "seek_hole": false, 00:22:33.120 "seek_data": false, 00:22:33.120 "copy": true, 00:22:33.120 "nvme_iov_md": false 00:22:33.120 }, 00:22:33.120 "memory_domains": [ 00:22:33.120 { 00:22:33.120 "dma_device_id": "system", 00:22:33.120 "dma_device_type": 1 00:22:33.120 }, 00:22:33.120 { 00:22:33.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.120 "dma_device_type": 2 00:22:33.120 } 00:22:33.120 ], 00:22:33.120 "driver_specific": {} 00:22:33.120 } 00:22:33.120 ] 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.120 BaseBdev3 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:33.120 05:33:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.120 [ 00:22:33.120 { 00:22:33.120 "name": "BaseBdev3", 00:22:33.120 "aliases": [ 00:22:33.120 "44e21ec9-b45b-4bbf-a989-7e8b89d8d3d1" 00:22:33.120 ], 00:22:33.120 "product_name": "Malloc disk", 00:22:33.120 "block_size": 512, 00:22:33.120 "num_blocks": 65536, 00:22:33.120 "uuid": "44e21ec9-b45b-4bbf-a989-7e8b89d8d3d1", 00:22:33.120 "assigned_rate_limits": { 00:22:33.120 "rw_ios_per_sec": 0, 00:22:33.120 "rw_mbytes_per_sec": 0, 00:22:33.120 "r_mbytes_per_sec": 0, 00:22:33.120 "w_mbytes_per_sec": 0 00:22:33.120 }, 00:22:33.120 "claimed": false, 00:22:33.120 "zoned": false, 00:22:33.120 "supported_io_types": { 00:22:33.120 "read": true, 00:22:33.120 "write": true, 00:22:33.120 "unmap": true, 00:22:33.120 "flush": true, 00:22:33.120 "reset": true, 00:22:33.120 "nvme_admin": false, 00:22:33.120 "nvme_io": false, 00:22:33.120 "nvme_io_md": 
false, 00:22:33.120 "write_zeroes": true, 00:22:33.120 "zcopy": true, 00:22:33.120 "get_zone_info": false, 00:22:33.120 "zone_management": false, 00:22:33.120 "zone_append": false, 00:22:33.120 "compare": false, 00:22:33.120 "compare_and_write": false, 00:22:33.120 "abort": true, 00:22:33.120 "seek_hole": false, 00:22:33.120 "seek_data": false, 00:22:33.120 "copy": true, 00:22:33.120 "nvme_iov_md": false 00:22:33.120 }, 00:22:33.120 "memory_domains": [ 00:22:33.120 { 00:22:33.120 "dma_device_id": "system", 00:22:33.120 "dma_device_type": 1 00:22:33.120 }, 00:22:33.120 { 00:22:33.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.120 "dma_device_type": 2 00:22:33.120 } 00:22:33.120 ], 00:22:33.120 "driver_specific": {} 00:22:33.120 } 00:22:33.120 ] 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.120 [2024-11-20 05:33:04.872690] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:33.120 [2024-11-20 05:33:04.872848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:33.120 [2024-11-20 05:33:04.872925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:22:33.120 [2024-11-20 05:33:04.875194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.120 05:33:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.120 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:33.120 "name": "Existed_Raid", 00:22:33.120 "uuid": "7aeb2f39-fffc-48d6-87f0-b69f17d207a5", 00:22:33.120 "strip_size_kb": 64, 00:22:33.120 "state": "configuring", 00:22:33.120 "raid_level": "raid5f", 00:22:33.120 "superblock": true, 00:22:33.120 "num_base_bdevs": 3, 00:22:33.120 "num_base_bdevs_discovered": 2, 00:22:33.120 "num_base_bdevs_operational": 3, 00:22:33.120 "base_bdevs_list": [ 00:22:33.120 { 00:22:33.120 "name": "BaseBdev1", 00:22:33.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.120 "is_configured": false, 00:22:33.120 "data_offset": 0, 00:22:33.120 "data_size": 0 00:22:33.120 }, 00:22:33.120 { 00:22:33.120 "name": "BaseBdev2", 00:22:33.120 "uuid": "158c2b4e-be94-485c-9f98-f967da0b065c", 00:22:33.120 "is_configured": true, 00:22:33.120 "data_offset": 2048, 00:22:33.120 "data_size": 63488 00:22:33.120 }, 00:22:33.120 { 00:22:33.121 "name": "BaseBdev3", 00:22:33.121 "uuid": "44e21ec9-b45b-4bbf-a989-7e8b89d8d3d1", 00:22:33.121 "is_configured": true, 00:22:33.121 "data_offset": 2048, 00:22:33.121 "data_size": 63488 00:22:33.121 } 00:22:33.121 ] 00:22:33.121 }' 00:22:33.121 05:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.121 05:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.379 [2024-11-20 05:33:05.160717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:33.379 
05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.379 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.380 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.380 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:22:33.380 "name": "Existed_Raid", 00:22:33.380 "uuid": "7aeb2f39-fffc-48d6-87f0-b69f17d207a5", 00:22:33.380 "strip_size_kb": 64, 00:22:33.380 "state": "configuring", 00:22:33.380 "raid_level": "raid5f", 00:22:33.380 "superblock": true, 00:22:33.380 "num_base_bdevs": 3, 00:22:33.380 "num_base_bdevs_discovered": 1, 00:22:33.380 "num_base_bdevs_operational": 3, 00:22:33.380 "base_bdevs_list": [ 00:22:33.380 { 00:22:33.380 "name": "BaseBdev1", 00:22:33.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.380 "is_configured": false, 00:22:33.380 "data_offset": 0, 00:22:33.380 "data_size": 0 00:22:33.380 }, 00:22:33.380 { 00:22:33.380 "name": null, 00:22:33.380 "uuid": "158c2b4e-be94-485c-9f98-f967da0b065c", 00:22:33.380 "is_configured": false, 00:22:33.380 "data_offset": 0, 00:22:33.380 "data_size": 63488 00:22:33.380 }, 00:22:33.380 { 00:22:33.380 "name": "BaseBdev3", 00:22:33.380 "uuid": "44e21ec9-b45b-4bbf-a989-7e8b89d8d3d1", 00:22:33.380 "is_configured": true, 00:22:33.380 "data_offset": 2048, 00:22:33.380 "data_size": 63488 00:22:33.380 } 00:22:33.380 ] 00:22:33.380 }' 00:22:33.380 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.380 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.946 [2024-11-20 05:33:05.535226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:33.946 BaseBdev1 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:33.946 
05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.946 [ 00:22:33.946 { 00:22:33.946 "name": "BaseBdev1", 00:22:33.946 "aliases": [ 00:22:33.946 "2e85bb0c-80cf-44bf-bf4a-cc3c0c15a095" 00:22:33.946 ], 00:22:33.946 "product_name": "Malloc disk", 00:22:33.946 "block_size": 512, 00:22:33.946 "num_blocks": 65536, 00:22:33.946 "uuid": "2e85bb0c-80cf-44bf-bf4a-cc3c0c15a095", 00:22:33.946 "assigned_rate_limits": { 00:22:33.946 "rw_ios_per_sec": 0, 00:22:33.946 "rw_mbytes_per_sec": 0, 00:22:33.946 "r_mbytes_per_sec": 0, 00:22:33.946 "w_mbytes_per_sec": 0 00:22:33.946 }, 00:22:33.946 "claimed": true, 00:22:33.946 "claim_type": "exclusive_write", 00:22:33.946 "zoned": false, 00:22:33.946 "supported_io_types": { 00:22:33.946 "read": true, 00:22:33.946 "write": true, 00:22:33.946 "unmap": true, 00:22:33.946 "flush": true, 00:22:33.946 "reset": true, 00:22:33.946 "nvme_admin": false, 00:22:33.946 "nvme_io": false, 00:22:33.946 "nvme_io_md": false, 00:22:33.946 "write_zeroes": true, 00:22:33.946 "zcopy": true, 00:22:33.946 "get_zone_info": false, 00:22:33.946 "zone_management": false, 00:22:33.946 "zone_append": false, 00:22:33.946 "compare": false, 00:22:33.946 "compare_and_write": false, 00:22:33.946 "abort": true, 00:22:33.946 "seek_hole": false, 00:22:33.946 "seek_data": false, 00:22:33.946 "copy": true, 00:22:33.946 "nvme_iov_md": false 00:22:33.946 }, 00:22:33.946 "memory_domains": [ 00:22:33.946 { 00:22:33.946 "dma_device_id": "system", 00:22:33.946 "dma_device_type": 1 00:22:33.946 }, 00:22:33.946 { 00:22:33.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.946 "dma_device_type": 2 00:22:33.946 } 00:22:33.946 ], 00:22:33.946 "driver_specific": {} 00:22:33.946 } 00:22:33.946 ] 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.946 
05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:22:33.946 "name": "Existed_Raid", 00:22:33.946 "uuid": "7aeb2f39-fffc-48d6-87f0-b69f17d207a5", 00:22:33.946 "strip_size_kb": 64, 00:22:33.946 "state": "configuring", 00:22:33.946 "raid_level": "raid5f", 00:22:33.946 "superblock": true, 00:22:33.946 "num_base_bdevs": 3, 00:22:33.946 "num_base_bdevs_discovered": 2, 00:22:33.946 "num_base_bdevs_operational": 3, 00:22:33.946 "base_bdevs_list": [ 00:22:33.946 { 00:22:33.946 "name": "BaseBdev1", 00:22:33.946 "uuid": "2e85bb0c-80cf-44bf-bf4a-cc3c0c15a095", 00:22:33.946 "is_configured": true, 00:22:33.946 "data_offset": 2048, 00:22:33.946 "data_size": 63488 00:22:33.946 }, 00:22:33.946 { 00:22:33.946 "name": null, 00:22:33.946 "uuid": "158c2b4e-be94-485c-9f98-f967da0b065c", 00:22:33.946 "is_configured": false, 00:22:33.946 "data_offset": 0, 00:22:33.946 "data_size": 63488 00:22:33.946 }, 00:22:33.946 { 00:22:33.946 "name": "BaseBdev3", 00:22:33.946 "uuid": "44e21ec9-b45b-4bbf-a989-7e8b89d8d3d1", 00:22:33.946 "is_configured": true, 00:22:33.946 "data_offset": 2048, 00:22:33.946 "data_size": 63488 00:22:33.946 } 00:22:33.946 ] 00:22:33.946 }' 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.946 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.205 [2024-11-20 05:33:05.923342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:34.205 05:33:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.205 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.205 "name": "Existed_Raid", 00:22:34.205 "uuid": "7aeb2f39-fffc-48d6-87f0-b69f17d207a5", 00:22:34.205 "strip_size_kb": 64, 00:22:34.205 "state": "configuring", 00:22:34.205 "raid_level": "raid5f", 00:22:34.205 "superblock": true, 00:22:34.205 "num_base_bdevs": 3, 00:22:34.205 "num_base_bdevs_discovered": 1, 00:22:34.205 "num_base_bdevs_operational": 3, 00:22:34.205 "base_bdevs_list": [ 00:22:34.205 { 00:22:34.205 "name": "BaseBdev1", 00:22:34.205 "uuid": "2e85bb0c-80cf-44bf-bf4a-cc3c0c15a095", 00:22:34.205 "is_configured": true, 00:22:34.205 "data_offset": 2048, 00:22:34.205 "data_size": 63488 00:22:34.205 }, 00:22:34.205 { 00:22:34.206 "name": null, 00:22:34.206 "uuid": "158c2b4e-be94-485c-9f98-f967da0b065c", 00:22:34.206 "is_configured": false, 00:22:34.206 "data_offset": 0, 00:22:34.206 "data_size": 63488 00:22:34.206 }, 00:22:34.206 { 00:22:34.206 "name": null, 00:22:34.206 "uuid": "44e21ec9-b45b-4bbf-a989-7e8b89d8d3d1", 00:22:34.206 "is_configured": false, 00:22:34.206 "data_offset": 0, 00:22:34.206 "data_size": 63488 00:22:34.206 } 00:22:34.206 ] 00:22:34.206 }' 00:22:34.206 05:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.206 05:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.463 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:22:34.463 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.463 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.463 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.463 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.463 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:34.463 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:34.463 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.464 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.464 [2024-11-20 05:33:06.267436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:34.464 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.464 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:34.464 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:34.464 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:34.464 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:34.464 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:34.464 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:34.464 05:33:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.464 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.464 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.464 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.464 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:34.464 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.464 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.464 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.464 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.722 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.722 "name": "Existed_Raid", 00:22:34.722 "uuid": "7aeb2f39-fffc-48d6-87f0-b69f17d207a5", 00:22:34.722 "strip_size_kb": 64, 00:22:34.722 "state": "configuring", 00:22:34.722 "raid_level": "raid5f", 00:22:34.722 "superblock": true, 00:22:34.722 "num_base_bdevs": 3, 00:22:34.722 "num_base_bdevs_discovered": 2, 00:22:34.722 "num_base_bdevs_operational": 3, 00:22:34.722 "base_bdevs_list": [ 00:22:34.722 { 00:22:34.722 "name": "BaseBdev1", 00:22:34.722 "uuid": "2e85bb0c-80cf-44bf-bf4a-cc3c0c15a095", 00:22:34.722 "is_configured": true, 00:22:34.722 "data_offset": 2048, 00:22:34.722 "data_size": 63488 00:22:34.722 }, 00:22:34.722 { 00:22:34.722 "name": null, 00:22:34.722 "uuid": "158c2b4e-be94-485c-9f98-f967da0b065c", 00:22:34.722 "is_configured": false, 00:22:34.722 "data_offset": 0, 00:22:34.722 "data_size": 63488 00:22:34.722 }, 00:22:34.722 { 
00:22:34.722 "name": "BaseBdev3", 00:22:34.722 "uuid": "44e21ec9-b45b-4bbf-a989-7e8b89d8d3d1", 00:22:34.722 "is_configured": true, 00:22:34.722 "data_offset": 2048, 00:22:34.722 "data_size": 63488 00:22:34.722 } 00:22:34.722 ] 00:22:34.722 }' 00:22:34.722 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.722 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.980 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.980 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.981 [2024-11-20 05:33:06.631530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.981 "name": "Existed_Raid", 00:22:34.981 "uuid": "7aeb2f39-fffc-48d6-87f0-b69f17d207a5", 00:22:34.981 "strip_size_kb": 64, 00:22:34.981 "state": "configuring", 00:22:34.981 "raid_level": "raid5f", 00:22:34.981 "superblock": true, 00:22:34.981 "num_base_bdevs": 3, 00:22:34.981 "num_base_bdevs_discovered": 1, 00:22:34.981 
"num_base_bdevs_operational": 3, 00:22:34.981 "base_bdevs_list": [ 00:22:34.981 { 00:22:34.981 "name": null, 00:22:34.981 "uuid": "2e85bb0c-80cf-44bf-bf4a-cc3c0c15a095", 00:22:34.981 "is_configured": false, 00:22:34.981 "data_offset": 0, 00:22:34.981 "data_size": 63488 00:22:34.981 }, 00:22:34.981 { 00:22:34.981 "name": null, 00:22:34.981 "uuid": "158c2b4e-be94-485c-9f98-f967da0b065c", 00:22:34.981 "is_configured": false, 00:22:34.981 "data_offset": 0, 00:22:34.981 "data_size": 63488 00:22:34.981 }, 00:22:34.981 { 00:22:34.981 "name": "BaseBdev3", 00:22:34.981 "uuid": "44e21ec9-b45b-4bbf-a989-7e8b89d8d3d1", 00:22:34.981 "is_configured": true, 00:22:34.981 "data_offset": 2048, 00:22:34.981 "data_size": 63488 00:22:34.981 } 00:22:34.981 ] 00:22:34.981 }' 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.981 05:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.239 05:33:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.239 [2024-11-20 05:33:07.033510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:35.239 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.497 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:35.497 "name": "Existed_Raid", 00:22:35.497 "uuid": "7aeb2f39-fffc-48d6-87f0-b69f17d207a5", 00:22:35.497 "strip_size_kb": 64, 00:22:35.497 "state": "configuring", 00:22:35.497 "raid_level": "raid5f", 00:22:35.497 "superblock": true, 00:22:35.497 "num_base_bdevs": 3, 00:22:35.497 "num_base_bdevs_discovered": 2, 00:22:35.497 "num_base_bdevs_operational": 3, 00:22:35.497 "base_bdevs_list": [ 00:22:35.497 { 00:22:35.497 "name": null, 00:22:35.497 "uuid": "2e85bb0c-80cf-44bf-bf4a-cc3c0c15a095", 00:22:35.497 "is_configured": false, 00:22:35.497 "data_offset": 0, 00:22:35.497 "data_size": 63488 00:22:35.497 }, 00:22:35.497 { 00:22:35.497 "name": "BaseBdev2", 00:22:35.497 "uuid": "158c2b4e-be94-485c-9f98-f967da0b065c", 00:22:35.497 "is_configured": true, 00:22:35.497 "data_offset": 2048, 00:22:35.497 "data_size": 63488 00:22:35.497 }, 00:22:35.497 { 00:22:35.497 "name": "BaseBdev3", 00:22:35.497 "uuid": "44e21ec9-b45b-4bbf-a989-7e8b89d8d3d1", 00:22:35.497 "is_configured": true, 00:22:35.497 "data_offset": 2048, 00:22:35.497 "data_size": 63488 00:22:35.497 } 00:22:35.497 ] 00:22:35.497 }' 00:22:35.497 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:35.497 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:35.755 05:33:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2e85bb0c-80cf-44bf-bf4a-cc3c0c15a095 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.755 [2024-11-20 05:33:07.411745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:35.755 [2024-11-20 05:33:07.411919] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:35.755 [2024-11-20 05:33:07.411932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:35.755 NewBaseBdev 00:22:35.755 [2024-11-20 05:33:07.412124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.755 05:33:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:35.755 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.756 [2024-11-20 05:33:07.414949] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:35.756 [2024-11-20 05:33:07.414964] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:35.756 [2024-11-20 05:33:07.415067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.756 [ 00:22:35.756 { 00:22:35.756 "name": "NewBaseBdev", 00:22:35.756 
"aliases": [ 00:22:35.756 "2e85bb0c-80cf-44bf-bf4a-cc3c0c15a095" 00:22:35.756 ], 00:22:35.756 "product_name": "Malloc disk", 00:22:35.756 "block_size": 512, 00:22:35.756 "num_blocks": 65536, 00:22:35.756 "uuid": "2e85bb0c-80cf-44bf-bf4a-cc3c0c15a095", 00:22:35.756 "assigned_rate_limits": { 00:22:35.756 "rw_ios_per_sec": 0, 00:22:35.756 "rw_mbytes_per_sec": 0, 00:22:35.756 "r_mbytes_per_sec": 0, 00:22:35.756 "w_mbytes_per_sec": 0 00:22:35.756 }, 00:22:35.756 "claimed": true, 00:22:35.756 "claim_type": "exclusive_write", 00:22:35.756 "zoned": false, 00:22:35.756 "supported_io_types": { 00:22:35.756 "read": true, 00:22:35.756 "write": true, 00:22:35.756 "unmap": true, 00:22:35.756 "flush": true, 00:22:35.756 "reset": true, 00:22:35.756 "nvme_admin": false, 00:22:35.756 "nvme_io": false, 00:22:35.756 "nvme_io_md": false, 00:22:35.756 "write_zeroes": true, 00:22:35.756 "zcopy": true, 00:22:35.756 "get_zone_info": false, 00:22:35.756 "zone_management": false, 00:22:35.756 "zone_append": false, 00:22:35.756 "compare": false, 00:22:35.756 "compare_and_write": false, 00:22:35.756 "abort": true, 00:22:35.756 "seek_hole": false, 00:22:35.756 "seek_data": false, 00:22:35.756 "copy": true, 00:22:35.756 "nvme_iov_md": false 00:22:35.756 }, 00:22:35.756 "memory_domains": [ 00:22:35.756 { 00:22:35.756 "dma_device_id": "system", 00:22:35.756 "dma_device_type": 1 00:22:35.756 }, 00:22:35.756 { 00:22:35.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.756 "dma_device_type": 2 00:22:35.756 } 00:22:35.756 ], 00:22:35.756 "driver_specific": {} 00:22:35.756 } 00:22:35.756 ] 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:35.756 05:33:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:35.756 "name": "Existed_Raid", 00:22:35.756 "uuid": "7aeb2f39-fffc-48d6-87f0-b69f17d207a5", 00:22:35.756 "strip_size_kb": 64, 00:22:35.756 "state": "online", 00:22:35.756 "raid_level": "raid5f", 00:22:35.756 "superblock": true, 00:22:35.756 
"num_base_bdevs": 3, 00:22:35.756 "num_base_bdevs_discovered": 3, 00:22:35.756 "num_base_bdevs_operational": 3, 00:22:35.756 "base_bdevs_list": [ 00:22:35.756 { 00:22:35.756 "name": "NewBaseBdev", 00:22:35.756 "uuid": "2e85bb0c-80cf-44bf-bf4a-cc3c0c15a095", 00:22:35.756 "is_configured": true, 00:22:35.756 "data_offset": 2048, 00:22:35.756 "data_size": 63488 00:22:35.756 }, 00:22:35.756 { 00:22:35.756 "name": "BaseBdev2", 00:22:35.756 "uuid": "158c2b4e-be94-485c-9f98-f967da0b065c", 00:22:35.756 "is_configured": true, 00:22:35.756 "data_offset": 2048, 00:22:35.756 "data_size": 63488 00:22:35.756 }, 00:22:35.756 { 00:22:35.756 "name": "BaseBdev3", 00:22:35.756 "uuid": "44e21ec9-b45b-4bbf-a989-7e8b89d8d3d1", 00:22:35.756 "is_configured": true, 00:22:35.756 "data_offset": 2048, 00:22:35.756 "data_size": 63488 00:22:35.756 } 00:22:35.756 ] 00:22:35.756 }' 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:35.756 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.014 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:36.014 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:36.014 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:36.014 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:36.014 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:36.014 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:36.014 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:36.014 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:22:36.014 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.014 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.014 [2024-11-20 05:33:07.754534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:36.014 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.014 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:36.014 "name": "Existed_Raid", 00:22:36.014 "aliases": [ 00:22:36.014 "7aeb2f39-fffc-48d6-87f0-b69f17d207a5" 00:22:36.014 ], 00:22:36.014 "product_name": "Raid Volume", 00:22:36.014 "block_size": 512, 00:22:36.014 "num_blocks": 126976, 00:22:36.014 "uuid": "7aeb2f39-fffc-48d6-87f0-b69f17d207a5", 00:22:36.014 "assigned_rate_limits": { 00:22:36.014 "rw_ios_per_sec": 0, 00:22:36.014 "rw_mbytes_per_sec": 0, 00:22:36.014 "r_mbytes_per_sec": 0, 00:22:36.014 "w_mbytes_per_sec": 0 00:22:36.014 }, 00:22:36.014 "claimed": false, 00:22:36.014 "zoned": false, 00:22:36.014 "supported_io_types": { 00:22:36.014 "read": true, 00:22:36.014 "write": true, 00:22:36.014 "unmap": false, 00:22:36.014 "flush": false, 00:22:36.014 "reset": true, 00:22:36.014 "nvme_admin": false, 00:22:36.014 "nvme_io": false, 00:22:36.014 "nvme_io_md": false, 00:22:36.014 "write_zeroes": true, 00:22:36.014 "zcopy": false, 00:22:36.014 "get_zone_info": false, 00:22:36.014 "zone_management": false, 00:22:36.014 "zone_append": false, 00:22:36.014 "compare": false, 00:22:36.014 "compare_and_write": false, 00:22:36.014 "abort": false, 00:22:36.014 "seek_hole": false, 00:22:36.014 "seek_data": false, 00:22:36.014 "copy": false, 00:22:36.014 "nvme_iov_md": false 00:22:36.014 }, 00:22:36.014 "driver_specific": { 00:22:36.014 "raid": { 00:22:36.014 "uuid": "7aeb2f39-fffc-48d6-87f0-b69f17d207a5", 00:22:36.014 
"strip_size_kb": 64, 00:22:36.014 "state": "online", 00:22:36.014 "raid_level": "raid5f", 00:22:36.014 "superblock": true, 00:22:36.014 "num_base_bdevs": 3, 00:22:36.014 "num_base_bdevs_discovered": 3, 00:22:36.014 "num_base_bdevs_operational": 3, 00:22:36.014 "base_bdevs_list": [ 00:22:36.014 { 00:22:36.014 "name": "NewBaseBdev", 00:22:36.014 "uuid": "2e85bb0c-80cf-44bf-bf4a-cc3c0c15a095", 00:22:36.014 "is_configured": true, 00:22:36.014 "data_offset": 2048, 00:22:36.014 "data_size": 63488 00:22:36.014 }, 00:22:36.014 { 00:22:36.014 "name": "BaseBdev2", 00:22:36.014 "uuid": "158c2b4e-be94-485c-9f98-f967da0b065c", 00:22:36.014 "is_configured": true, 00:22:36.014 "data_offset": 2048, 00:22:36.014 "data_size": 63488 00:22:36.014 }, 00:22:36.014 { 00:22:36.014 "name": "BaseBdev3", 00:22:36.015 "uuid": "44e21ec9-b45b-4bbf-a989-7e8b89d8d3d1", 00:22:36.015 "is_configured": true, 00:22:36.015 "data_offset": 2048, 00:22:36.015 "data_size": 63488 00:22:36.015 } 00:22:36.015 ] 00:22:36.015 } 00:22:36.015 } 00:22:36.015 }' 00:22:36.015 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:36.015 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:36.015 BaseBdev2 00:22:36.015 BaseBdev3' 00:22:36.015 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:36.015 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:36.015 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:36.015 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:36.015 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:22:36.015 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.015 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.273 
05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.273 [2024-11-20 05:33:07.942406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:36.273 [2024-11-20 05:33:07.942429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:36.273 [2024-11-20 05:33:07.942493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:36.273 [2024-11-20 05:33:07.942713] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:36.273 [2024-11-20 05:33:07.942737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78359 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78359 ']' 00:22:36.273 05:33:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 78359 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78359 00:22:36.273 killing process with pid 78359 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78359' 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 78359 00:22:36.273 [2024-11-20 05:33:07.973123] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:36.273 05:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 78359 00:22:36.530 [2024-11-20 05:33:08.121603] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:37.096 ************************************ 00:22:37.096 END TEST raid5f_state_function_test_sb 00:22:37.096 ************************************ 00:22:37.096 05:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:37.096 00:22:37.096 real 0m7.365s 00:22:37.096 user 0m11.725s 00:22:37.096 sys 0m1.337s 00:22:37.096 05:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:37.096 05:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.096 05:33:08 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:22:37.096 05:33:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:22:37.096 05:33:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:37.096 05:33:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:37.096 ************************************ 00:22:37.096 START TEST raid5f_superblock_test 00:22:37.096 ************************************ 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78946 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78946 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 78946 ']' 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:37.096 05:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.096 [2024-11-20 05:33:08.808458] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:22:37.096 [2024-11-20 05:33:08.808584] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78946 ] 00:22:37.394 [2024-11-20 05:33:08.964848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.394 [2024-11-20 05:33:09.048857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.394 [2024-11-20 05:33:09.159153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:37.394 [2024-11-20 05:33:09.159191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.973 malloc1 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.973 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.974 [2024-11-20 05:33:09.693788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:37.974 [2024-11-20 05:33:09.693842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.974 [2024-11-20 05:33:09.693857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:37.974 [2024-11-20 05:33:09.693865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.974 [2024-11-20 05:33:09.695618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.974 [2024-11-20 05:33:09.695648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:37.974 pt1 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.974 malloc2 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.974 [2024-11-20 05:33:09.725419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:37.974 [2024-11-20 05:33:09.725464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.974 [2024-11-20 05:33:09.725481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:37.974 [2024-11-20 05:33:09.725487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.974 [2024-11-20 05:33:09.727178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.974 [2024-11-20 05:33:09.727207] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:37.974 pt2 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.974 malloc3 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.974 [2024-11-20 05:33:09.769984] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:37.974 [2024-11-20 05:33:09.770033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.974 [2024-11-20 05:33:09.770051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:37.974 [2024-11-20 05:33:09.770058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.974 [2024-11-20 05:33:09.771795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.974 [2024-11-20 05:33:09.771825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:37.974 pt3 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.974 [2024-11-20 05:33:09.778027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:37.974 [2024-11-20 05:33:09.779585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:37.974 [2024-11-20 05:33:09.779643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:37.974 [2024-11-20 05:33:09.779779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:37.974 [2024-11-20 05:33:09.779796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:22:37.974 [2024-11-20 05:33:09.780029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:37.974 [2024-11-20 05:33:09.783192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:37.974 [2024-11-20 05:33:09.783276] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:37.974 [2024-11-20 05:33:09.783508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.974 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.232 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.232 "name": "raid_bdev1", 00:22:38.232 "uuid": "c426ac0f-4616-4a37-820c-68bf47f4ef3d", 00:22:38.232 "strip_size_kb": 64, 00:22:38.232 "state": "online", 00:22:38.232 "raid_level": "raid5f", 00:22:38.232 "superblock": true, 00:22:38.232 "num_base_bdevs": 3, 00:22:38.232 "num_base_bdevs_discovered": 3, 00:22:38.232 "num_base_bdevs_operational": 3, 00:22:38.232 "base_bdevs_list": [ 00:22:38.232 { 00:22:38.232 "name": "pt1", 00:22:38.232 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:38.232 "is_configured": true, 00:22:38.232 "data_offset": 2048, 00:22:38.232 "data_size": 63488 00:22:38.232 }, 00:22:38.232 { 00:22:38.232 "name": "pt2", 00:22:38.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:38.232 "is_configured": true, 00:22:38.232 "data_offset": 2048, 00:22:38.232 "data_size": 63488 00:22:38.232 }, 00:22:38.232 { 00:22:38.232 "name": "pt3", 00:22:38.232 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:38.232 "is_configured": true, 00:22:38.232 "data_offset": 2048, 00:22:38.232 "data_size": 63488 00:22:38.232 } 00:22:38.232 ] 00:22:38.232 }' 00:22:38.232 05:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.232 05:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:38.490 05:33:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.490 [2024-11-20 05:33:10.111698] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:38.490 "name": "raid_bdev1", 00:22:38.490 "aliases": [ 00:22:38.490 "c426ac0f-4616-4a37-820c-68bf47f4ef3d" 00:22:38.490 ], 00:22:38.490 "product_name": "Raid Volume", 00:22:38.490 "block_size": 512, 00:22:38.490 "num_blocks": 126976, 00:22:38.490 "uuid": "c426ac0f-4616-4a37-820c-68bf47f4ef3d", 00:22:38.490 "assigned_rate_limits": { 00:22:38.490 "rw_ios_per_sec": 0, 00:22:38.490 "rw_mbytes_per_sec": 0, 00:22:38.490 "r_mbytes_per_sec": 0, 00:22:38.490 "w_mbytes_per_sec": 0 00:22:38.490 }, 00:22:38.490 "claimed": false, 00:22:38.490 "zoned": false, 00:22:38.490 "supported_io_types": { 00:22:38.490 "read": true, 00:22:38.490 "write": true, 00:22:38.490 "unmap": false, 00:22:38.490 "flush": false, 00:22:38.490 "reset": true, 00:22:38.490 "nvme_admin": false, 00:22:38.490 "nvme_io": false, 00:22:38.490 "nvme_io_md": false, 
00:22:38.490 "write_zeroes": true, 00:22:38.490 "zcopy": false, 00:22:38.490 "get_zone_info": false, 00:22:38.490 "zone_management": false, 00:22:38.490 "zone_append": false, 00:22:38.490 "compare": false, 00:22:38.490 "compare_and_write": false, 00:22:38.490 "abort": false, 00:22:38.490 "seek_hole": false, 00:22:38.490 "seek_data": false, 00:22:38.490 "copy": false, 00:22:38.490 "nvme_iov_md": false 00:22:38.490 }, 00:22:38.490 "driver_specific": { 00:22:38.490 "raid": { 00:22:38.490 "uuid": "c426ac0f-4616-4a37-820c-68bf47f4ef3d", 00:22:38.490 "strip_size_kb": 64, 00:22:38.490 "state": "online", 00:22:38.490 "raid_level": "raid5f", 00:22:38.490 "superblock": true, 00:22:38.490 "num_base_bdevs": 3, 00:22:38.490 "num_base_bdevs_discovered": 3, 00:22:38.490 "num_base_bdevs_operational": 3, 00:22:38.490 "base_bdevs_list": [ 00:22:38.490 { 00:22:38.490 "name": "pt1", 00:22:38.490 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:38.490 "is_configured": true, 00:22:38.490 "data_offset": 2048, 00:22:38.490 "data_size": 63488 00:22:38.490 }, 00:22:38.490 { 00:22:38.490 "name": "pt2", 00:22:38.490 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:38.490 "is_configured": true, 00:22:38.490 "data_offset": 2048, 00:22:38.490 "data_size": 63488 00:22:38.490 }, 00:22:38.490 { 00:22:38.490 "name": "pt3", 00:22:38.490 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:38.490 "is_configured": true, 00:22:38.490 "data_offset": 2048, 00:22:38.490 "data_size": 63488 00:22:38.490 } 00:22:38.490 ] 00:22:38.490 } 00:22:38.490 } 00:22:38.490 }' 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:38.490 pt2 00:22:38.490 pt3' 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:38.490 
05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:38.490 [2024-11-20 05:33:10.311713] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:38.490 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c426ac0f-4616-4a37-820c-68bf47f4ef3d 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c426ac0f-4616-4a37-820c-68bf47f4ef3d ']' 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:38.749 05:33:10 
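The per-base-bdev loop above uses two jq filters: one to list the configured base bdevs, and one to join `block_size`, `md_size`, `md_interleave` and `dif_type` into a comparison string (missing fields join as empty strings, which is why the trace matches the padded literal `512   `). A minimal self-contained Python sketch of the same filtering, with the input dict copied from the values visible in this trace (it is an illustrative excerpt, not real `bdev_get_bdevs` output):

```python
import json

# Excerpt mirroring the fields the trace inspects via jq; values
# are taken from the log output above (block_size 512, pt1-pt3).
raid_info = {
    "block_size": 512,
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": "pt1", "is_configured": True},
                {"name": "pt2", "is_configured": True},
                {"name": "pt3", "is_configured": True},
            ]
        }
    },
}

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
configured = [
    b["name"]
    for b in raid_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(configured)  # ['pt1', 'pt2', 'pt3']

# Equivalent of:
#   jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# Absent keys contribute empty strings, producing '512' plus three
# trailing spaces -- the string the [[ 512 == \5\1\2\ \ \  ]] test expects.
fields = [raid_info.get(k) for k in
          ("block_size", "md_size", "md_interleave", "dif_type")]
cmp_str = " ".join("" if f is None else str(f) for f in fields)
print(repr(cmp_str))  # '512   '
```

This is why the shell comparison uses an escaped-space pattern rather than a bare `512`: the joined string always carries one separator per absent metadata field.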
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.749 [2024-11-20 05:33:10.347570] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:38.749 [2024-11-20 05:33:10.347593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:38.749 [2024-11-20 05:33:10.347656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:38.749 [2024-11-20 05:33:10.347721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:38.749 [2024-11-20 05:33:10.347729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.749 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.750 [2024-11-20 05:33:10.451637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:38.750 [2024-11-20 05:33:10.453216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:38.750 [2024-11-20 05:33:10.453256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:38.750 [2024-11-20 05:33:10.453296] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:38.750 [2024-11-20 05:33:10.453341] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:38.750 [2024-11-20 05:33:10.453357] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:38.750 [2024-11-20 05:33:10.453473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:38.750 [2024-11-20 05:33:10.453499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:38.750 request: 00:22:38.750 { 00:22:38.750 "name": "raid_bdev1", 00:22:38.750 "raid_level": "raid5f", 00:22:38.750 "base_bdevs": [ 00:22:38.750 "malloc1", 00:22:38.750 "malloc2", 00:22:38.750 "malloc3" 00:22:38.750 ], 00:22:38.750 "strip_size_kb": 64, 00:22:38.750 "superblock": false, 00:22:38.750 "method": "bdev_raid_create", 00:22:38.750 "req_id": 1 00:22:38.750 } 00:22:38.750 Got JSON-RPC error response 00:22:38.750 response: 00:22:38.750 { 00:22:38.750 "code": -17, 00:22:38.750 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:38.750 } 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.750 
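The `NOT rpc_cmd bdev_raid_create ...` step above passes only when the RPC fails: the base bdevs still carry raid_bdev1's superblock, so the create is rejected with JSON-RPC error -17. A hedged sketch of that expectation check, using the error body copied from the trace (`create_should_fail` is a hypothetical helper standing in for the shell `NOT` wrapper, which treats a non-zero exit status as success):

```python
import json

# Error response copied verbatim from the trace above.
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

def create_should_fail(resp):
    """Mirror the NOT-wrapper logic from autotest_common.sh: the test
    step counts as passing only when the RPC returned an error
    (es != 0 in the shell trace)."""
    return resp.get("code", 0) != 0

print(create_should_fail(response))  # True
```

Note the trace confirms this path in the `[[ 1 == 0 ]]` and `es=1` lines: the RPC's non-zero status is inverted into a passing check.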
05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.750 [2024-11-20 05:33:10.499594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:38.750 [2024-11-20 05:33:10.499641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.750 [2024-11-20 05:33:10.499656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:38.750 [2024-11-20 05:33:10.499664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.750 [2024-11-20 05:33:10.501484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.750 [2024-11-20 05:33:10.501511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:38.750 [2024-11-20 05:33:10.501573] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:38.750 [2024-11-20 05:33:10.501611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:38.750 pt1 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.750 "name": "raid_bdev1", 00:22:38.750 "uuid": "c426ac0f-4616-4a37-820c-68bf47f4ef3d", 00:22:38.750 "strip_size_kb": 64, 00:22:38.750 "state": "configuring", 00:22:38.750 "raid_level": "raid5f", 00:22:38.750 "superblock": true, 00:22:38.750 "num_base_bdevs": 3, 00:22:38.750 "num_base_bdevs_discovered": 1, 00:22:38.750 
"num_base_bdevs_operational": 3, 00:22:38.750 "base_bdevs_list": [ 00:22:38.750 { 00:22:38.750 "name": "pt1", 00:22:38.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:38.750 "is_configured": true, 00:22:38.750 "data_offset": 2048, 00:22:38.750 "data_size": 63488 00:22:38.750 }, 00:22:38.750 { 00:22:38.750 "name": null, 00:22:38.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:38.750 "is_configured": false, 00:22:38.750 "data_offset": 2048, 00:22:38.750 "data_size": 63488 00:22:38.750 }, 00:22:38.750 { 00:22:38.750 "name": null, 00:22:38.750 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:38.750 "is_configured": false, 00:22:38.750 "data_offset": 2048, 00:22:38.750 "data_size": 63488 00:22:38.750 } 00:22:38.750 ] 00:22:38.750 }' 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.750 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.009 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:22:39.009 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:39.009 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.009 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.009 [2024-11-20 05:33:10.831671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:39.009 [2024-11-20 05:33:10.831724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.009 [2024-11-20 05:33:10.831740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:39.009 [2024-11-20 05:33:10.831747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.009 [2024-11-20 05:33:10.832094] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.009 [2024-11-20 05:33:10.832110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:39.009 [2024-11-20 05:33:10.832171] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:39.009 [2024-11-20 05:33:10.832187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:39.009 pt2 00:22:39.009 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.009 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:22:39.009 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.009 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.009 [2024-11-20 05:33:10.839690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.266 05:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:39.266 "name": "raid_bdev1", 00:22:39.266 "uuid": "c426ac0f-4616-4a37-820c-68bf47f4ef3d", 00:22:39.266 "strip_size_kb": 64, 00:22:39.266 "state": "configuring", 00:22:39.266 "raid_level": "raid5f", 00:22:39.266 "superblock": true, 00:22:39.266 "num_base_bdevs": 3, 00:22:39.266 "num_base_bdevs_discovered": 1, 00:22:39.266 "num_base_bdevs_operational": 3, 00:22:39.266 "base_bdevs_list": [ 00:22:39.266 { 00:22:39.266 "name": "pt1", 00:22:39.266 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:39.266 "is_configured": true, 00:22:39.266 "data_offset": 2048, 00:22:39.266 "data_size": 63488 00:22:39.266 }, 00:22:39.266 { 00:22:39.266 "name": null, 00:22:39.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:39.266 "is_configured": false, 00:22:39.266 "data_offset": 0, 00:22:39.266 "data_size": 63488 00:22:39.266 }, 00:22:39.266 { 00:22:39.267 "name": null, 00:22:39.267 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:39.267 "is_configured": false, 00:22:39.267 "data_offset": 2048, 00:22:39.267 "data_size": 63488 00:22:39.267 } 00:22:39.267 ] 00:22:39.267 }' 00:22:39.267 05:33:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:39.267 05:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.525 [2024-11-20 05:33:11.175726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:39.525 [2024-11-20 05:33:11.175932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.525 [2024-11-20 05:33:11.175950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:39.525 [2024-11-20 05:33:11.175958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.525 [2024-11-20 05:33:11.176311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.525 [2024-11-20 05:33:11.176325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:39.525 [2024-11-20 05:33:11.176403] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:39.525 [2024-11-20 05:33:11.176422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:39.525 pt2 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:39.525 05:33:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.525 [2024-11-20 05:33:11.183726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:39.525 [2024-11-20 05:33:11.183768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.525 [2024-11-20 05:33:11.183780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:39.525 [2024-11-20 05:33:11.183788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.525 [2024-11-20 05:33:11.184120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.525 [2024-11-20 05:33:11.184133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:39.525 [2024-11-20 05:33:11.184185] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:39.525 [2024-11-20 05:33:11.184200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:39.525 [2024-11-20 05:33:11.184295] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:39.525 [2024-11-20 05:33:11.184304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:39.525 [2024-11-20 05:33:11.184509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:39.525 [2024-11-20 05:33:11.187243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:39.525 [2024-11-20 05:33:11.187342] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:39.525 [2024-11-20 05:33:11.187480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:39.525 pt3 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.525 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:39.525 "name": "raid_bdev1", 00:22:39.525 "uuid": "c426ac0f-4616-4a37-820c-68bf47f4ef3d", 00:22:39.525 "strip_size_kb": 64, 00:22:39.525 "state": "online", 00:22:39.526 "raid_level": "raid5f", 00:22:39.526 "superblock": true, 00:22:39.526 "num_base_bdevs": 3, 00:22:39.526 "num_base_bdevs_discovered": 3, 00:22:39.526 "num_base_bdevs_operational": 3, 00:22:39.526 "base_bdevs_list": [ 00:22:39.526 { 00:22:39.526 "name": "pt1", 00:22:39.526 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:39.526 "is_configured": true, 00:22:39.526 "data_offset": 2048, 00:22:39.526 "data_size": 63488 00:22:39.526 }, 00:22:39.526 { 00:22:39.526 "name": "pt2", 00:22:39.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:39.526 "is_configured": true, 00:22:39.526 "data_offset": 2048, 00:22:39.526 "data_size": 63488 00:22:39.526 }, 00:22:39.526 { 00:22:39.526 "name": "pt3", 00:22:39.526 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:39.526 "is_configured": true, 00:22:39.526 "data_offset": 2048, 00:22:39.526 "data_size": 63488 00:22:39.526 } 00:22:39.526 ] 00:22:39.526 }' 00:22:39.526 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:39.526 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.784 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:39.784 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:39.784 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:22:39.784 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:39.784 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:39.784 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:39.784 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:39.784 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:39.784 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.784 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.784 [2024-11-20 05:33:11.551046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:39.784 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.784 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:39.784 "name": "raid_bdev1", 00:22:39.784 "aliases": [ 00:22:39.784 "c426ac0f-4616-4a37-820c-68bf47f4ef3d" 00:22:39.784 ], 00:22:39.784 "product_name": "Raid Volume", 00:22:39.784 "block_size": 512, 00:22:39.784 "num_blocks": 126976, 00:22:39.784 "uuid": "c426ac0f-4616-4a37-820c-68bf47f4ef3d", 00:22:39.784 "assigned_rate_limits": { 00:22:39.784 "rw_ios_per_sec": 0, 00:22:39.784 "rw_mbytes_per_sec": 0, 00:22:39.784 "r_mbytes_per_sec": 0, 00:22:39.784 "w_mbytes_per_sec": 0 00:22:39.784 }, 00:22:39.784 "claimed": false, 00:22:39.784 "zoned": false, 00:22:39.784 "supported_io_types": { 00:22:39.784 "read": true, 00:22:39.784 "write": true, 00:22:39.784 "unmap": false, 00:22:39.784 "flush": false, 00:22:39.784 "reset": true, 00:22:39.784 "nvme_admin": false, 00:22:39.784 "nvme_io": false, 00:22:39.784 "nvme_io_md": false, 00:22:39.784 "write_zeroes": true, 00:22:39.784 "zcopy": false, 00:22:39.784 
"get_zone_info": false, 00:22:39.784 "zone_management": false, 00:22:39.784 "zone_append": false, 00:22:39.784 "compare": false, 00:22:39.784 "compare_and_write": false, 00:22:39.784 "abort": false, 00:22:39.784 "seek_hole": false, 00:22:39.784 "seek_data": false, 00:22:39.784 "copy": false, 00:22:39.784 "nvme_iov_md": false 00:22:39.784 }, 00:22:39.784 "driver_specific": { 00:22:39.784 "raid": { 00:22:39.784 "uuid": "c426ac0f-4616-4a37-820c-68bf47f4ef3d", 00:22:39.784 "strip_size_kb": 64, 00:22:39.784 "state": "online", 00:22:39.784 "raid_level": "raid5f", 00:22:39.784 "superblock": true, 00:22:39.784 "num_base_bdevs": 3, 00:22:39.784 "num_base_bdevs_discovered": 3, 00:22:39.784 "num_base_bdevs_operational": 3, 00:22:39.784 "base_bdevs_list": [ 00:22:39.784 { 00:22:39.784 "name": "pt1", 00:22:39.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:39.784 "is_configured": true, 00:22:39.784 "data_offset": 2048, 00:22:39.784 "data_size": 63488 00:22:39.784 }, 00:22:39.784 { 00:22:39.784 "name": "pt2", 00:22:39.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:39.784 "is_configured": true, 00:22:39.784 "data_offset": 2048, 00:22:39.784 "data_size": 63488 00:22:39.784 }, 00:22:39.784 { 00:22:39.784 "name": "pt3", 00:22:39.784 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:39.784 "is_configured": true, 00:22:39.784 "data_offset": 2048, 00:22:39.784 "data_size": 63488 00:22:39.784 } 00:22:39.784 ] 00:22:39.784 } 00:22:39.784 } 00:22:39.784 }' 00:22:39.784 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:40.041 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:40.041 pt2 00:22:40.041 pt3' 00:22:40.041 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:40.041 05:33:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:40.041 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:40.041 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:40.041 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.042 [2024-11-20 05:33:11.751043] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c426ac0f-4616-4a37-820c-68bf47f4ef3d '!=' c426ac0f-4616-4a37-820c-68bf47f4ef3d ']' 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.042 [2024-11-20 05:33:11.774931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.042 
05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:40.042 "name": "raid_bdev1", 00:22:40.042 "uuid": "c426ac0f-4616-4a37-820c-68bf47f4ef3d", 00:22:40.042 "strip_size_kb": 64, 00:22:40.042 "state": "online", 00:22:40.042 "raid_level": "raid5f", 00:22:40.042 "superblock": true, 00:22:40.042 "num_base_bdevs": 3, 00:22:40.042 "num_base_bdevs_discovered": 2, 00:22:40.042 "num_base_bdevs_operational": 2, 00:22:40.042 "base_bdevs_list": [ 00:22:40.042 { 00:22:40.042 "name": null, 00:22:40.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.042 "is_configured": false, 00:22:40.042 "data_offset": 0, 00:22:40.042 "data_size": 63488 00:22:40.042 }, 00:22:40.042 { 00:22:40.042 "name": "pt2", 00:22:40.042 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:40.042 "is_configured": true, 00:22:40.042 "data_offset": 2048, 00:22:40.042 "data_size": 63488 00:22:40.042 }, 00:22:40.042 { 00:22:40.042 "name": "pt3", 00:22:40.042 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:40.042 "is_configured": true, 00:22:40.042 "data_offset": 2048, 00:22:40.042 "data_size": 63488 00:22:40.042 } 00:22:40.042 ] 00:22:40.042 }' 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:40.042 05:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.300 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:40.300 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.300 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.300 [2024-11-20 05:33:12.094949] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:40.300 [2024-11-20 05:33:12.095065] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:40.300 [2024-11-20 05:33:12.095165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:40.300 [2024-11-20 05:33:12.095216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:40.300 [2024-11-20 05:33:12.095227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:40.300 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.300 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.300 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.300 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.300 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:40.300 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.557 [2024-11-20 05:33:12.150941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:40.557 [2024-11-20 05:33:12.151250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.557 [2024-11-20 05:33:12.151303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:22:40.557 [2024-11-20 05:33:12.151347] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:22:40.557 [2024-11-20 05:33:12.153201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.557 [2024-11-20 05:33:12.153371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:40.557 [2024-11-20 05:33:12.153482] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:40.557 [2024-11-20 05:33:12.153522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:40.557 pt2 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:40.557 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:40.558 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:40.558 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:40.558 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.558 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:40.558 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.558 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.558 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.558 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:40.558 "name": "raid_bdev1", 00:22:40.558 "uuid": "c426ac0f-4616-4a37-820c-68bf47f4ef3d", 00:22:40.558 "strip_size_kb": 64, 00:22:40.558 "state": "configuring", 00:22:40.558 "raid_level": "raid5f", 00:22:40.558 "superblock": true, 00:22:40.558 "num_base_bdevs": 3, 00:22:40.558 "num_base_bdevs_discovered": 1, 00:22:40.558 "num_base_bdevs_operational": 2, 00:22:40.558 "base_bdevs_list": [ 00:22:40.558 { 00:22:40.558 "name": null, 00:22:40.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.558 "is_configured": false, 00:22:40.558 "data_offset": 2048, 00:22:40.558 "data_size": 63488 00:22:40.558 }, 00:22:40.558 { 00:22:40.558 "name": "pt2", 00:22:40.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:40.558 "is_configured": true, 00:22:40.558 "data_offset": 2048, 00:22:40.558 "data_size": 63488 00:22:40.558 }, 00:22:40.558 { 00:22:40.558 "name": null, 00:22:40.558 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:40.558 "is_configured": false, 00:22:40.558 "data_offset": 2048, 00:22:40.558 "data_size": 63488 00:22:40.558 } 00:22:40.558 ] 00:22:40.558 }' 00:22:40.558 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:40.558 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.815 [2024-11-20 05:33:12.483018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:40.815 [2024-11-20 05:33:12.483256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.815 [2024-11-20 05:33:12.483311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:40.815 [2024-11-20 05:33:12.483355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.815 [2024-11-20 05:33:12.483737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.815 [2024-11-20 05:33:12.483864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:40.815 [2024-11-20 05:33:12.483992] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:40.815 [2024-11-20 05:33:12.484102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:40.815 [2024-11-20 05:33:12.484279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:40.815 [2024-11-20 05:33:12.484402] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:40.815 [2024-11-20 05:33:12.484619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:40.815 [2024-11-20 05:33:12.487506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:40.815 [2024-11-20 05:33:12.487585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:22:40.815 [2024-11-20 05:33:12.487843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.815 pt3 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.815 05:33:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:40.815 "name": "raid_bdev1", 00:22:40.815 "uuid": "c426ac0f-4616-4a37-820c-68bf47f4ef3d", 00:22:40.815 "strip_size_kb": 64, 00:22:40.815 "state": "online", 00:22:40.815 "raid_level": "raid5f", 00:22:40.815 "superblock": true, 00:22:40.815 "num_base_bdevs": 3, 00:22:40.815 "num_base_bdevs_discovered": 2, 00:22:40.815 "num_base_bdevs_operational": 2, 00:22:40.815 "base_bdevs_list": [ 00:22:40.815 { 00:22:40.815 "name": null, 00:22:40.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.815 "is_configured": false, 00:22:40.815 "data_offset": 2048, 00:22:40.815 "data_size": 63488 00:22:40.815 }, 00:22:40.815 { 00:22:40.815 "name": "pt2", 00:22:40.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:40.815 "is_configured": true, 00:22:40.815 "data_offset": 2048, 00:22:40.815 "data_size": 63488 00:22:40.815 }, 00:22:40.815 { 00:22:40.815 "name": "pt3", 00:22:40.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:40.815 "is_configured": true, 00:22:40.815 "data_offset": 2048, 00:22:40.815 "data_size": 63488 00:22:40.815 } 00:22:40.815 ] 00:22:40.815 }' 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:40.815 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.099 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:41.099 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.099 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.099 [2024-11-20 05:33:12.791774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:41.099 [2024-11-20 05:33:12.791802] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:41.099 [2024-11-20 05:33:12.791860] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:41.099 [2024-11-20 05:33:12.791911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:41.099 [2024-11-20 05:33:12.791919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.100 [2024-11-20 05:33:12.843807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:41.100 [2024-11-20 05:33:12.844076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.100 [2024-11-20 05:33:12.844188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:41.100 [2024-11-20 05:33:12.844230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.100 [2024-11-20 05:33:12.846132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.100 [2024-11-20 05:33:12.846267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:41.100 [2024-11-20 05:33:12.846427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:41.100 [2024-11-20 05:33:12.846480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:41.100 [2024-11-20 05:33:12.846641] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:41.100 [2024-11-20 05:33:12.846705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:41.100 [2024-11-20 05:33:12.846729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:41.100 [2024-11-20 05:33:12.846812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:41.100 pt1 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:22:41.100 05:33:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.100 "name": "raid_bdev1", 00:22:41.100 "uuid": "c426ac0f-4616-4a37-820c-68bf47f4ef3d", 00:22:41.100 "strip_size_kb": 64, 00:22:41.100 "state": "configuring", 00:22:41.100 "raid_level": "raid5f", 00:22:41.100 
"superblock": true, 00:22:41.100 "num_base_bdevs": 3, 00:22:41.100 "num_base_bdevs_discovered": 1, 00:22:41.100 "num_base_bdevs_operational": 2, 00:22:41.100 "base_bdevs_list": [ 00:22:41.100 { 00:22:41.100 "name": null, 00:22:41.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.100 "is_configured": false, 00:22:41.100 "data_offset": 2048, 00:22:41.100 "data_size": 63488 00:22:41.100 }, 00:22:41.100 { 00:22:41.100 "name": "pt2", 00:22:41.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:41.100 "is_configured": true, 00:22:41.100 "data_offset": 2048, 00:22:41.100 "data_size": 63488 00:22:41.100 }, 00:22:41.100 { 00:22:41.100 "name": null, 00:22:41.100 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:41.100 "is_configured": false, 00:22:41.100 "data_offset": 2048, 00:22:41.100 "data_size": 63488 00:22:41.100 } 00:22:41.100 ] 00:22:41.100 }' 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.100 05:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.358 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:22:41.358 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.358 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.358 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:41.358 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.616 [2024-11-20 05:33:13.211885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:41.616 [2024-11-20 05:33:13.212260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.616 [2024-11-20 05:33:13.212290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:41.616 [2024-11-20 05:33:13.212298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.616 [2024-11-20 05:33:13.212673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.616 [2024-11-20 05:33:13.212685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:41.616 [2024-11-20 05:33:13.212746] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:41.616 [2024-11-20 05:33:13.212761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:41.616 [2024-11-20 05:33:13.212851] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:41.616 [2024-11-20 05:33:13.212858] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:41.616 [2024-11-20 05:33:13.213046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:41.616 [2024-11-20 05:33:13.216069] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:41.616 [2024-11-20 05:33:13.216089] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:41.616 [2024-11-20 05:33:13.216276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.616 pt3 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.616 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.616 "name": "raid_bdev1", 00:22:41.616 "uuid": "c426ac0f-4616-4a37-820c-68bf47f4ef3d", 00:22:41.616 "strip_size_kb": 64, 00:22:41.616 "state": "online", 00:22:41.616 "raid_level": 
"raid5f", 00:22:41.616 "superblock": true, 00:22:41.616 "num_base_bdevs": 3, 00:22:41.616 "num_base_bdevs_discovered": 2, 00:22:41.616 "num_base_bdevs_operational": 2, 00:22:41.616 "base_bdevs_list": [ 00:22:41.616 { 00:22:41.616 "name": null, 00:22:41.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.616 "is_configured": false, 00:22:41.616 "data_offset": 2048, 00:22:41.617 "data_size": 63488 00:22:41.617 }, 00:22:41.617 { 00:22:41.617 "name": "pt2", 00:22:41.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:41.617 "is_configured": true, 00:22:41.617 "data_offset": 2048, 00:22:41.617 "data_size": 63488 00:22:41.617 }, 00:22:41.617 { 00:22:41.617 "name": "pt3", 00:22:41.617 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:41.617 "is_configured": true, 00:22:41.617 "data_offset": 2048, 00:22:41.617 "data_size": 63488 00:22:41.617 } 00:22:41.617 ] 00:22:41.617 }' 00:22:41.617 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.617 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.874 [2024-11-20 05:33:13.556445] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c426ac0f-4616-4a37-820c-68bf47f4ef3d '!=' c426ac0f-4616-4a37-820c-68bf47f4ef3d ']' 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78946 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 78946 ']' 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 78946 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78946 00:22:41.874 killing process with pid 78946 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78946' 00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 78946 00:22:41.874 [2024-11-20 05:33:13.611134] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:41.874 [2024-11-20 05:33:13.611208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:22:41.874 05:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 78946 00:22:41.874 [2024-11-20 05:33:13.611257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:41.874 [2024-11-20 05:33:13.611267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:42.132 [2024-11-20 05:33:13.761182] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:42.699 ************************************ 00:22:42.699 END TEST raid5f_superblock_test 00:22:42.699 ************************************ 00:22:42.699 05:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:22:42.699 00:22:42.699 real 0m5.591s 00:22:42.699 user 0m8.851s 00:22:42.699 sys 0m0.964s 00:22:42.699 05:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:42.699 05:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.699 05:33:14 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:22:42.699 05:33:14 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:22:42.699 05:33:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:22:42.699 05:33:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:42.699 05:33:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:42.699 ************************************ 00:22:42.699 START TEST raid5f_rebuild_test 00:22:42.699 ************************************ 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:42.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:22:42.699 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=79368 00:22:42.700 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 79368 00:22:42.700 05:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 79368 ']' 00:22:42.700 05:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.700 05:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:42.700 05:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:42.700 05:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:42.700 05:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.700 05:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:42.700 [2024-11-20 05:33:14.438720] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:22:42.700 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:42.700 Zero copy mechanism will not be used. 00:22:42.700 [2024-11-20 05:33:14.439419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79368 ] 00:22:42.958 [2024-11-20 05:33:14.591100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.958 [2024-11-20 05:33:14.674222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.958 [2024-11-20 05:33:14.783435] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:42.958 [2024-11-20 05:33:14.783468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:43.523 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:43.523 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:22:43.523 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:43.523 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:43.523 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.523 05:33:15 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.523 BaseBdev1_malloc 00:22:43.523 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.523 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:43.523 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.523 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.524 [2024-11-20 05:33:15.269247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:43.524 [2024-11-20 05:33:15.269300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.524 [2024-11-20 05:33:15.269318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:43.524 [2024-11-20 05:33:15.269328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.524 [2024-11-20 05:33:15.271079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.524 [2024-11-20 05:33:15.271112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:43.524 BaseBdev1 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.524 BaseBdev2_malloc 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.524 [2024-11-20 05:33:15.301126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:43.524 [2024-11-20 05:33:15.301285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.524 [2024-11-20 05:33:15.301305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:43.524 [2024-11-20 05:33:15.301315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.524 [2024-11-20 05:33:15.303062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.524 [2024-11-20 05:33:15.303089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:43.524 BaseBdev2 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.524 BaseBdev3_malloc 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.524 [2024-11-20 05:33:15.351190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:43.524 [2024-11-20 05:33:15.351240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.524 [2024-11-20 05:33:15.351257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:43.524 [2024-11-20 05:33:15.351267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.524 [2024-11-20 05:33:15.353008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.524 [2024-11-20 05:33:15.353041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:43.524 BaseBdev3 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.524 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.783 spare_malloc 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.783 spare_delay 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.783 05:33:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.783 [2024-11-20 05:33:15.390823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:43.783 [2024-11-20 05:33:15.390867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.783 [2024-11-20 05:33:15.390881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:43.783 [2024-11-20 05:33:15.390889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.783 [2024-11-20 05:33:15.392627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.783 [2024-11-20 05:33:15.392660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:43.783 spare 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.783 [2024-11-20 05:33:15.398879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:43.783 [2024-11-20 05:33:15.400392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:43.783 [2024-11-20 05:33:15.400445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:43.783 [2024-11-20 05:33:15.400508] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:43.783 [2024-11-20 05:33:15.400517] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:43.783 [2024-11-20 05:33:15.400727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:43.783 [2024-11-20 05:33:15.403703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:43.783 [2024-11-20 05:33:15.403719] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:43.783 [2024-11-20 05:33:15.403864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.783 "name": "raid_bdev1", 00:22:43.783 "uuid": "37343696-dd31-41e6-ad50-2acb63fa36cb", 00:22:43.783 "strip_size_kb": 64, 00:22:43.783 "state": "online", 00:22:43.783 "raid_level": "raid5f", 00:22:43.783 "superblock": false, 00:22:43.783 "num_base_bdevs": 3, 00:22:43.783 "num_base_bdevs_discovered": 3, 00:22:43.783 "num_base_bdevs_operational": 3, 00:22:43.783 "base_bdevs_list": [ 00:22:43.783 { 00:22:43.783 "name": "BaseBdev1", 00:22:43.783 "uuid": "89e6dbd5-664a-54ba-ab3c-1ab2710e6c33", 00:22:43.783 "is_configured": true, 00:22:43.783 "data_offset": 0, 00:22:43.783 "data_size": 65536 00:22:43.783 }, 00:22:43.783 { 00:22:43.783 "name": "BaseBdev2", 00:22:43.783 "uuid": "8b39a6ff-b8b7-52c2-8f56-7faff5fac7db", 00:22:43.783 "is_configured": true, 00:22:43.783 "data_offset": 0, 00:22:43.783 "data_size": 65536 00:22:43.783 }, 00:22:43.783 { 00:22:43.783 "name": "BaseBdev3", 00:22:43.783 "uuid": "5bb71d06-ef78-5238-b6e2-d329b61829e3", 00:22:43.783 "is_configured": true, 00:22:43.783 "data_offset": 0, 00:22:43.783 "data_size": 65536 00:22:43.783 } 00:22:43.783 ] 00:22:43.783 }' 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.783 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:44.042 [2024-11-20 05:33:15.732111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:44.042 05:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:44.300 [2024-11-20 05:33:16.060045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:44.300 /dev/nbd0 00:22:44.300 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:44.300 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:44.300 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:44.300 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:22:44.300 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:44.300 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:44.300 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:44.300 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:22:44.300 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:44.300 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:44.300 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:44.300 1+0 records in 00:22:44.300 1+0 records out 00:22:44.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270943 s, 15.1 MB/s 00:22:44.300 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:44.300 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:22:44.300 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:44.300 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:44.301 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:22:44.301 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:44.301 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:44.301 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:44.301 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:22:44.301 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:22:44.301 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:22:44.866 512+0 records in 00:22:44.866 512+0 records out 00:22:44.866 67108864 bytes (67 MB, 64 MiB) copied, 0.334077 s, 201 MB/s 00:22:44.866 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:44.866 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:44.866 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:44.866 05:33:16 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:22:44.866 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:44.866 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:44.866 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:45.123 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:45.123 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:45.123 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:45.123 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:45.123 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:45.123 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:45.123 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:45.123 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:45.123 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:45.123 [2024-11-20 05:33:16.750359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.124 [2024-11-20 05:33:16.754121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:45.124 
05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:45.124 "name": "raid_bdev1", 00:22:45.124 "uuid": "37343696-dd31-41e6-ad50-2acb63fa36cb", 00:22:45.124 "strip_size_kb": 64, 00:22:45.124 "state": "online", 00:22:45.124 "raid_level": "raid5f", 00:22:45.124 "superblock": false, 00:22:45.124 "num_base_bdevs": 3, 00:22:45.124 "num_base_bdevs_discovered": 2, 00:22:45.124 "num_base_bdevs_operational": 2, 00:22:45.124 "base_bdevs_list": [ 00:22:45.124 { 
00:22:45.124 "name": null, 00:22:45.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.124 "is_configured": false, 00:22:45.124 "data_offset": 0, 00:22:45.124 "data_size": 65536 00:22:45.124 }, 00:22:45.124 { 00:22:45.124 "name": "BaseBdev2", 00:22:45.124 "uuid": "8b39a6ff-b8b7-52c2-8f56-7faff5fac7db", 00:22:45.124 "is_configured": true, 00:22:45.124 "data_offset": 0, 00:22:45.124 "data_size": 65536 00:22:45.124 }, 00:22:45.124 { 00:22:45.124 "name": "BaseBdev3", 00:22:45.124 "uuid": "5bb71d06-ef78-5238-b6e2-d329b61829e3", 00:22:45.124 "is_configured": true, 00:22:45.124 "data_offset": 0, 00:22:45.124 "data_size": 65536 00:22:45.124 } 00:22:45.124 ] 00:22:45.124 }' 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:45.124 05:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.382 05:33:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:45.382 05:33:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.382 05:33:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.382 [2024-11-20 05:33:17.066190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:45.382 [2024-11-20 05:33:17.075255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:22:45.382 05:33:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.382 05:33:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:45.382 [2024-11-20 05:33:17.079902] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:46.315 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:46.315 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:22:46.315 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:46.315 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:46.315 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:46.315 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.315 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.315 05:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.315 05:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.315 05:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.315 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:46.315 "name": "raid_bdev1", 00:22:46.315 "uuid": "37343696-dd31-41e6-ad50-2acb63fa36cb", 00:22:46.315 "strip_size_kb": 64, 00:22:46.315 "state": "online", 00:22:46.315 "raid_level": "raid5f", 00:22:46.315 "superblock": false, 00:22:46.315 "num_base_bdevs": 3, 00:22:46.315 "num_base_bdevs_discovered": 3, 00:22:46.315 "num_base_bdevs_operational": 3, 00:22:46.315 "process": { 00:22:46.315 "type": "rebuild", 00:22:46.315 "target": "spare", 00:22:46.315 "progress": { 00:22:46.315 "blocks": 20480, 00:22:46.316 "percent": 15 00:22:46.316 } 00:22:46.316 }, 00:22:46.316 "base_bdevs_list": [ 00:22:46.316 { 00:22:46.316 "name": "spare", 00:22:46.316 "uuid": "badc6c96-f75f-5573-a357-a5ce8f644ec2", 00:22:46.316 "is_configured": true, 00:22:46.316 "data_offset": 0, 00:22:46.316 "data_size": 65536 00:22:46.316 }, 00:22:46.316 { 00:22:46.316 "name": "BaseBdev2", 00:22:46.316 "uuid": "8b39a6ff-b8b7-52c2-8f56-7faff5fac7db", 00:22:46.316 "is_configured": true, 00:22:46.316 "data_offset": 0, 00:22:46.316 
"data_size": 65536 00:22:46.316 }, 00:22:46.316 { 00:22:46.316 "name": "BaseBdev3", 00:22:46.316 "uuid": "5bb71d06-ef78-5238-b6e2-d329b61829e3", 00:22:46.316 "is_configured": true, 00:22:46.316 "data_offset": 0, 00:22:46.316 "data_size": 65536 00:22:46.316 } 00:22:46.316 ] 00:22:46.316 }' 00:22:46.316 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:46.316 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:46.316 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.576 [2024-11-20 05:33:18.177033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:46.576 [2024-11-20 05:33:18.189016] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:46.576 [2024-11-20 05:33:18.189068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:46.576 [2024-11-20 05:33:18.189084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:46.576 [2024-11-20 05:33:18.189090] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:46.576 "name": "raid_bdev1", 00:22:46.576 "uuid": "37343696-dd31-41e6-ad50-2acb63fa36cb", 00:22:46.576 "strip_size_kb": 64, 00:22:46.576 "state": "online", 00:22:46.576 "raid_level": "raid5f", 00:22:46.576 "superblock": false, 00:22:46.576 "num_base_bdevs": 3, 00:22:46.576 "num_base_bdevs_discovered": 2, 00:22:46.576 "num_base_bdevs_operational": 2, 00:22:46.576 "base_bdevs_list": [ 00:22:46.576 { 00:22:46.576 "name": null, 00:22:46.576 
"uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.576 "is_configured": false, 00:22:46.576 "data_offset": 0, 00:22:46.576 "data_size": 65536 00:22:46.576 }, 00:22:46.576 { 00:22:46.576 "name": "BaseBdev2", 00:22:46.576 "uuid": "8b39a6ff-b8b7-52c2-8f56-7faff5fac7db", 00:22:46.576 "is_configured": true, 00:22:46.576 "data_offset": 0, 00:22:46.576 "data_size": 65536 00:22:46.576 }, 00:22:46.576 { 00:22:46.576 "name": "BaseBdev3", 00:22:46.576 "uuid": "5bb71d06-ef78-5238-b6e2-d329b61829e3", 00:22:46.576 "is_configured": true, 00:22:46.576 "data_offset": 0, 00:22:46.576 "data_size": 65536 00:22:46.576 } 00:22:46.576 ] 00:22:46.576 }' 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:46.576 05:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:46.835 "name": "raid_bdev1", 00:22:46.835 "uuid": "37343696-dd31-41e6-ad50-2acb63fa36cb", 00:22:46.835 "strip_size_kb": 64, 00:22:46.835 "state": "online", 00:22:46.835 "raid_level": "raid5f", 00:22:46.835 "superblock": false, 00:22:46.835 "num_base_bdevs": 3, 00:22:46.835 "num_base_bdevs_discovered": 2, 00:22:46.835 "num_base_bdevs_operational": 2, 00:22:46.835 "base_bdevs_list": [ 00:22:46.835 { 00:22:46.835 "name": null, 00:22:46.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.835 "is_configured": false, 00:22:46.835 "data_offset": 0, 00:22:46.835 "data_size": 65536 00:22:46.835 }, 00:22:46.835 { 00:22:46.835 "name": "BaseBdev2", 00:22:46.835 "uuid": "8b39a6ff-b8b7-52c2-8f56-7faff5fac7db", 00:22:46.835 "is_configured": true, 00:22:46.835 "data_offset": 0, 00:22:46.835 "data_size": 65536 00:22:46.835 }, 00:22:46.835 { 00:22:46.835 "name": "BaseBdev3", 00:22:46.835 "uuid": "5bb71d06-ef78-5238-b6e2-d329b61829e3", 00:22:46.835 "is_configured": true, 00:22:46.835 "data_offset": 0, 00:22:46.835 "data_size": 65536 00:22:46.835 } 00:22:46.835 ] 00:22:46.835 }' 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.835 [2024-11-20 05:33:18.615254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:22:46.835 [2024-11-20 05:33:18.623697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.835 05:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:46.835 [2024-11-20 05:33:18.628080] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:48.207 "name": "raid_bdev1", 00:22:48.207 "uuid": "37343696-dd31-41e6-ad50-2acb63fa36cb", 00:22:48.207 "strip_size_kb": 64, 00:22:48.207 "state": "online", 00:22:48.207 "raid_level": "raid5f", 00:22:48.207 "superblock": false, 00:22:48.207 "num_base_bdevs": 3, 00:22:48.207 
"num_base_bdevs_discovered": 3, 00:22:48.207 "num_base_bdevs_operational": 3, 00:22:48.207 "process": { 00:22:48.207 "type": "rebuild", 00:22:48.207 "target": "spare", 00:22:48.207 "progress": { 00:22:48.207 "blocks": 20480, 00:22:48.207 "percent": 15 00:22:48.207 } 00:22:48.207 }, 00:22:48.207 "base_bdevs_list": [ 00:22:48.207 { 00:22:48.207 "name": "spare", 00:22:48.207 "uuid": "badc6c96-f75f-5573-a357-a5ce8f644ec2", 00:22:48.207 "is_configured": true, 00:22:48.207 "data_offset": 0, 00:22:48.207 "data_size": 65536 00:22:48.207 }, 00:22:48.207 { 00:22:48.207 "name": "BaseBdev2", 00:22:48.207 "uuid": "8b39a6ff-b8b7-52c2-8f56-7faff5fac7db", 00:22:48.207 "is_configured": true, 00:22:48.207 "data_offset": 0, 00:22:48.207 "data_size": 65536 00:22:48.207 }, 00:22:48.207 { 00:22:48.207 "name": "BaseBdev3", 00:22:48.207 "uuid": "5bb71d06-ef78-5238-b6e2-d329b61829e3", 00:22:48.207 "is_configured": true, 00:22:48.207 "data_offset": 0, 00:22:48.207 "data_size": 65536 00:22:48.207 } 00:22:48.207 ] 00:22:48.207 }' 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=434 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout 
)) 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.207 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:48.207 "name": "raid_bdev1", 00:22:48.207 "uuid": "37343696-dd31-41e6-ad50-2acb63fa36cb", 00:22:48.207 "strip_size_kb": 64, 00:22:48.207 "state": "online", 00:22:48.207 "raid_level": "raid5f", 00:22:48.207 "superblock": false, 00:22:48.207 "num_base_bdevs": 3, 00:22:48.207 "num_base_bdevs_discovered": 3, 00:22:48.207 "num_base_bdevs_operational": 3, 00:22:48.207 "process": { 00:22:48.207 "type": "rebuild", 00:22:48.207 "target": "spare", 00:22:48.207 "progress": { 00:22:48.207 "blocks": 20480, 00:22:48.207 "percent": 15 00:22:48.207 } 00:22:48.207 }, 00:22:48.207 "base_bdevs_list": [ 00:22:48.207 { 00:22:48.207 "name": "spare", 00:22:48.207 "uuid": "badc6c96-f75f-5573-a357-a5ce8f644ec2", 00:22:48.207 "is_configured": true, 00:22:48.207 "data_offset": 0, 00:22:48.207 
"data_size": 65536 00:22:48.207 }, 00:22:48.207 { 00:22:48.207 "name": "BaseBdev2", 00:22:48.207 "uuid": "8b39a6ff-b8b7-52c2-8f56-7faff5fac7db", 00:22:48.207 "is_configured": true, 00:22:48.207 "data_offset": 0, 00:22:48.208 "data_size": 65536 00:22:48.208 }, 00:22:48.208 { 00:22:48.208 "name": "BaseBdev3", 00:22:48.208 "uuid": "5bb71d06-ef78-5238-b6e2-d329b61829e3", 00:22:48.208 "is_configured": true, 00:22:48.208 "data_offset": 0, 00:22:48.208 "data_size": 65536 00:22:48.208 } 00:22:48.208 ] 00:22:48.208 }' 00:22:48.208 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:48.208 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.208 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:48.208 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.208 05:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:49.141 05:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:49.141 05:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.141 05:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:49.141 05:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:49.141 05:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:49.141 05:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:49.141 05:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.141 05:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.141 05:33:20 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.141 05:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.141 05:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.141 05:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:49.141 "name": "raid_bdev1", 00:22:49.141 "uuid": "37343696-dd31-41e6-ad50-2acb63fa36cb", 00:22:49.141 "strip_size_kb": 64, 00:22:49.141 "state": "online", 00:22:49.141 "raid_level": "raid5f", 00:22:49.141 "superblock": false, 00:22:49.141 "num_base_bdevs": 3, 00:22:49.141 "num_base_bdevs_discovered": 3, 00:22:49.141 "num_base_bdevs_operational": 3, 00:22:49.141 "process": { 00:22:49.141 "type": "rebuild", 00:22:49.141 "target": "spare", 00:22:49.141 "progress": { 00:22:49.141 "blocks": 43008, 00:22:49.141 "percent": 32 00:22:49.141 } 00:22:49.141 }, 00:22:49.141 "base_bdevs_list": [ 00:22:49.141 { 00:22:49.141 "name": "spare", 00:22:49.141 "uuid": "badc6c96-f75f-5573-a357-a5ce8f644ec2", 00:22:49.141 "is_configured": true, 00:22:49.141 "data_offset": 0, 00:22:49.141 "data_size": 65536 00:22:49.141 }, 00:22:49.141 { 00:22:49.141 "name": "BaseBdev2", 00:22:49.141 "uuid": "8b39a6ff-b8b7-52c2-8f56-7faff5fac7db", 00:22:49.141 "is_configured": true, 00:22:49.141 "data_offset": 0, 00:22:49.141 "data_size": 65536 00:22:49.141 }, 00:22:49.141 { 00:22:49.141 "name": "BaseBdev3", 00:22:49.141 "uuid": "5bb71d06-ef78-5238-b6e2-d329b61829e3", 00:22:49.141 "is_configured": true, 00:22:49.141 "data_offset": 0, 00:22:49.141 "data_size": 65536 00:22:49.141 } 00:22:49.141 ] 00:22:49.141 }' 00:22:49.141 05:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:49.141 05:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:49.141 05:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:22:49.141 05:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.141 05:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:50.512 05:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:50.512 05:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:50.512 05:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:50.512 05:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:50.512 05:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:50.512 05:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:50.512 05:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.512 05:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.512 05:33:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.512 05:33:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.512 05:33:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.512 05:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:50.512 "name": "raid_bdev1", 00:22:50.512 "uuid": "37343696-dd31-41e6-ad50-2acb63fa36cb", 00:22:50.512 "strip_size_kb": 64, 00:22:50.512 "state": "online", 00:22:50.512 "raid_level": "raid5f", 00:22:50.512 "superblock": false, 00:22:50.512 "num_base_bdevs": 3, 00:22:50.512 "num_base_bdevs_discovered": 3, 00:22:50.512 "num_base_bdevs_operational": 3, 00:22:50.512 "process": { 00:22:50.512 "type": "rebuild", 00:22:50.512 "target": "spare", 00:22:50.512 
"progress": { 00:22:50.512 "blocks": 65536, 00:22:50.512 "percent": 50 00:22:50.512 } 00:22:50.512 }, 00:22:50.512 "base_bdevs_list": [ 00:22:50.512 { 00:22:50.512 "name": "spare", 00:22:50.512 "uuid": "badc6c96-f75f-5573-a357-a5ce8f644ec2", 00:22:50.512 "is_configured": true, 00:22:50.512 "data_offset": 0, 00:22:50.512 "data_size": 65536 00:22:50.512 }, 00:22:50.512 { 00:22:50.512 "name": "BaseBdev2", 00:22:50.512 "uuid": "8b39a6ff-b8b7-52c2-8f56-7faff5fac7db", 00:22:50.512 "is_configured": true, 00:22:50.512 "data_offset": 0, 00:22:50.512 "data_size": 65536 00:22:50.512 }, 00:22:50.512 { 00:22:50.512 "name": "BaseBdev3", 00:22:50.512 "uuid": "5bb71d06-ef78-5238-b6e2-d329b61829e3", 00:22:50.512 "is_configured": true, 00:22:50.512 "data_offset": 0, 00:22:50.512 "data_size": 65536 00:22:50.512 } 00:22:50.512 ] 00:22:50.512 }' 00:22:50.512 05:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:50.512 05:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:50.512 05:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:50.512 05:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:50.512 05:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:51.443 05:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:51.443 05:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.443 05:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:51.443 05:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:51.443 05:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:51.443 05:33:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:51.443 05:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.443 05:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.443 05:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.443 05:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.443 05:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.443 05:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:51.443 "name": "raid_bdev1", 00:22:51.443 "uuid": "37343696-dd31-41e6-ad50-2acb63fa36cb", 00:22:51.443 "strip_size_kb": 64, 00:22:51.443 "state": "online", 00:22:51.443 "raid_level": "raid5f", 00:22:51.443 "superblock": false, 00:22:51.443 "num_base_bdevs": 3, 00:22:51.443 "num_base_bdevs_discovered": 3, 00:22:51.443 "num_base_bdevs_operational": 3, 00:22:51.443 "process": { 00:22:51.443 "type": "rebuild", 00:22:51.443 "target": "spare", 00:22:51.443 "progress": { 00:22:51.443 "blocks": 88064, 00:22:51.443 "percent": 67 00:22:51.443 } 00:22:51.443 }, 00:22:51.443 "base_bdevs_list": [ 00:22:51.443 { 00:22:51.443 "name": "spare", 00:22:51.444 "uuid": "badc6c96-f75f-5573-a357-a5ce8f644ec2", 00:22:51.444 "is_configured": true, 00:22:51.444 "data_offset": 0, 00:22:51.444 "data_size": 65536 00:22:51.444 }, 00:22:51.444 { 00:22:51.444 "name": "BaseBdev2", 00:22:51.444 "uuid": "8b39a6ff-b8b7-52c2-8f56-7faff5fac7db", 00:22:51.444 "is_configured": true, 00:22:51.444 "data_offset": 0, 00:22:51.444 "data_size": 65536 00:22:51.444 }, 00:22:51.444 { 00:22:51.444 "name": "BaseBdev3", 00:22:51.444 "uuid": "5bb71d06-ef78-5238-b6e2-d329b61829e3", 00:22:51.444 "is_configured": true, 00:22:51.444 "data_offset": 0, 00:22:51.444 "data_size": 65536 00:22:51.444 } 00:22:51.444 ] 00:22:51.444 }' 
00:22:51.444 05:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:51.444 05:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:51.444 05:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:51.444 05:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.444 05:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:52.376 05:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:52.376 05:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.376 05:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:52.376 05:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:52.376 05:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:52.376 05:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:52.376 05:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.376 05:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.376 05:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.376 05:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.376 05:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.376 05:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:52.376 "name": "raid_bdev1", 00:22:52.376 "uuid": "37343696-dd31-41e6-ad50-2acb63fa36cb", 00:22:52.376 "strip_size_kb": 64, 00:22:52.376 
"state": "online", 00:22:52.376 "raid_level": "raid5f", 00:22:52.376 "superblock": false, 00:22:52.376 "num_base_bdevs": 3, 00:22:52.376 "num_base_bdevs_discovered": 3, 00:22:52.376 "num_base_bdevs_operational": 3, 00:22:52.376 "process": { 00:22:52.376 "type": "rebuild", 00:22:52.376 "target": "spare", 00:22:52.376 "progress": { 00:22:52.376 "blocks": 110592, 00:22:52.376 "percent": 84 00:22:52.376 } 00:22:52.376 }, 00:22:52.376 "base_bdevs_list": [ 00:22:52.376 { 00:22:52.376 "name": "spare", 00:22:52.376 "uuid": "badc6c96-f75f-5573-a357-a5ce8f644ec2", 00:22:52.376 "is_configured": true, 00:22:52.376 "data_offset": 0, 00:22:52.376 "data_size": 65536 00:22:52.376 }, 00:22:52.376 { 00:22:52.376 "name": "BaseBdev2", 00:22:52.376 "uuid": "8b39a6ff-b8b7-52c2-8f56-7faff5fac7db", 00:22:52.376 "is_configured": true, 00:22:52.376 "data_offset": 0, 00:22:52.376 "data_size": 65536 00:22:52.376 }, 00:22:52.376 { 00:22:52.376 "name": "BaseBdev3", 00:22:52.376 "uuid": "5bb71d06-ef78-5238-b6e2-d329b61829e3", 00:22:52.376 "is_configured": true, 00:22:52.376 "data_offset": 0, 00:22:52.376 "data_size": 65536 00:22:52.376 } 00:22:52.376 ] 00:22:52.376 }' 00:22:52.376 05:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:52.634 05:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:52.634 05:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:52.634 05:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:52.634 05:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:53.566 [2024-11-20 05:33:25.077860] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:53.566 [2024-11-20 05:33:25.077934] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:53.566 [2024-11-20 
05:33:25.077975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:53.566 "name": "raid_bdev1", 00:22:53.566 "uuid": "37343696-dd31-41e6-ad50-2acb63fa36cb", 00:22:53.566 "strip_size_kb": 64, 00:22:53.566 "state": "online", 00:22:53.566 "raid_level": "raid5f", 00:22:53.566 "superblock": false, 00:22:53.566 "num_base_bdevs": 3, 00:22:53.566 "num_base_bdevs_discovered": 3, 00:22:53.566 "num_base_bdevs_operational": 3, 00:22:53.566 "base_bdevs_list": [ 00:22:53.566 { 00:22:53.566 "name": "spare", 00:22:53.566 "uuid": "badc6c96-f75f-5573-a357-a5ce8f644ec2", 00:22:53.566 "is_configured": true, 00:22:53.566 "data_offset": 0, 00:22:53.566 "data_size": 65536 
00:22:53.566 }, 00:22:53.566 { 00:22:53.566 "name": "BaseBdev2", 00:22:53.566 "uuid": "8b39a6ff-b8b7-52c2-8f56-7faff5fac7db", 00:22:53.566 "is_configured": true, 00:22:53.566 "data_offset": 0, 00:22:53.566 "data_size": 65536 00:22:53.566 }, 00:22:53.566 { 00:22:53.566 "name": "BaseBdev3", 00:22:53.566 "uuid": "5bb71d06-ef78-5238-b6e2-d329b61829e3", 00:22:53.566 "is_configured": true, 00:22:53.566 "data_offset": 0, 00:22:53.566 "data_size": 65536 00:22:53.566 } 00:22:53.566 ] 00:22:53.566 }' 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:53.566 "name": "raid_bdev1", 00:22:53.566 "uuid": "37343696-dd31-41e6-ad50-2acb63fa36cb", 00:22:53.566 "strip_size_kb": 64, 00:22:53.566 "state": "online", 00:22:53.566 "raid_level": "raid5f", 00:22:53.566 "superblock": false, 00:22:53.566 "num_base_bdevs": 3, 00:22:53.566 "num_base_bdevs_discovered": 3, 00:22:53.566 "num_base_bdevs_operational": 3, 00:22:53.566 "base_bdevs_list": [ 00:22:53.566 { 00:22:53.566 "name": "spare", 00:22:53.566 "uuid": "badc6c96-f75f-5573-a357-a5ce8f644ec2", 00:22:53.566 "is_configured": true, 00:22:53.566 "data_offset": 0, 00:22:53.566 "data_size": 65536 00:22:53.566 }, 00:22:53.566 { 00:22:53.566 "name": "BaseBdev2", 00:22:53.566 "uuid": "8b39a6ff-b8b7-52c2-8f56-7faff5fac7db", 00:22:53.566 "is_configured": true, 00:22:53.566 "data_offset": 0, 00:22:53.566 "data_size": 65536 00:22:53.566 }, 00:22:53.566 { 00:22:53.566 "name": "BaseBdev3", 00:22:53.566 "uuid": "5bb71d06-ef78-5238-b6e2-d329b61829e3", 00:22:53.566 "is_configured": true, 00:22:53.566 "data_offset": 0, 00:22:53.566 "data_size": 65536 00:22:53.566 } 00:22:53.566 ] 00:22:53.566 }' 00:22:53.566 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.825 "name": "raid_bdev1", 00:22:53.825 "uuid": "37343696-dd31-41e6-ad50-2acb63fa36cb", 00:22:53.825 "strip_size_kb": 64, 00:22:53.825 "state": "online", 00:22:53.825 "raid_level": "raid5f", 00:22:53.825 "superblock": false, 00:22:53.825 "num_base_bdevs": 3, 00:22:53.825 "num_base_bdevs_discovered": 3, 00:22:53.825 "num_base_bdevs_operational": 3, 00:22:53.825 "base_bdevs_list": [ 00:22:53.825 { 00:22:53.825 "name": "spare", 00:22:53.825 "uuid": 
"badc6c96-f75f-5573-a357-a5ce8f644ec2", 00:22:53.825 "is_configured": true, 00:22:53.825 "data_offset": 0, 00:22:53.825 "data_size": 65536 00:22:53.825 }, 00:22:53.825 { 00:22:53.825 "name": "BaseBdev2", 00:22:53.825 "uuid": "8b39a6ff-b8b7-52c2-8f56-7faff5fac7db", 00:22:53.825 "is_configured": true, 00:22:53.825 "data_offset": 0, 00:22:53.825 "data_size": 65536 00:22:53.825 }, 00:22:53.825 { 00:22:53.825 "name": "BaseBdev3", 00:22:53.825 "uuid": "5bb71d06-ef78-5238-b6e2-d329b61829e3", 00:22:53.825 "is_configured": true, 00:22:53.825 "data_offset": 0, 00:22:53.825 "data_size": 65536 00:22:53.825 } 00:22:53.825 ] 00:22:53.825 }' 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.825 05:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.084 [2024-11-20 05:33:25.739544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:54.084 [2024-11-20 05:33:25.739570] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:54.084 [2024-11-20 05:33:25.739635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:54.084 [2024-11-20 05:33:25.739702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:54.084 [2024-11-20 05:33:25.739714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:54.084 05:33:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:54.342 /dev/nbd0 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:54.342 1+0 records in 00:22:54.342 1+0 records out 00:22:54.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218831 s, 18.7 MB/s 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:54.342 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:54.600 /dev/nbd1 00:22:54.600 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:54.601 1+0 records in 00:22:54.601 1+0 records out 00:22:54.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253494 s, 16.2 MB/s 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:54.601 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:54.859 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:54.859 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:54.859 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:54.859 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:54.859 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:54.859 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:22:54.859 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:54.859 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:54.859 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:54.859 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 79368 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 79368 ']' 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 79368 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 79368 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:55.117 killing process with pid 79368 00:22:55.117 Received shutdown signal, test time was about 60.000000 seconds 00:22:55.117 00:22:55.117 Latency(us) 00:22:55.117 [2024-11-20T05:33:26.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.117 [2024-11-20T05:33:26.952Z] =================================================================================================================== 00:22:55.117 [2024-11-20T05:33:26.952Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79368' 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 79368 00:22:55.117 05:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 79368 00:22:55.117 [2024-11-20 05:33:26.819123] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:55.375 [2024-11-20 05:33:27.011058] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:55.972 05:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:22:55.972 ************************************ 00:22:55.972 END TEST raid5f_rebuild_test 00:22:55.972 ************************************ 00:22:55.972 00:22:55.972 real 0m13.195s 00:22:55.972 user 0m16.096s 00:22:55.972 sys 0m1.478s 00:22:55.972 05:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:55.972 05:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.972 05:33:27 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:22:55.972 05:33:27 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:22:55.972 05:33:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:55.972 05:33:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:55.972 ************************************ 00:22:55.972 START TEST raid5f_rebuild_test_sb 00:22:55.972 ************************************ 00:22:55.972 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:22:55.972 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:22:55.972 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:22:55.972 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:55.972 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:55.972 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:55.972 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:55.972 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:55.972 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:55.972 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:55.972 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:55.972 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:55.973 05:33:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=79786 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 79786 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 79786 ']' 00:22:55.973 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:55.974 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.974 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:55.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.974 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.974 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:55.974 05:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.974 [2024-11-20 05:33:27.671684] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:22:55.974 [2024-11-20 05:33:27.671811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79786 ] 00:22:55.974 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:55.974 Zero copy mechanism will not be used. 
00:22:56.238 [2024-11-20 05:33:27.830104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.238 [2024-11-20 05:33:27.918690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.238 [2024-11-20 05:33:28.034026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:56.238 [2024-11-20 05:33:28.034065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.804 BaseBdev1_malloc 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.804 [2024-11-20 05:33:28.467240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:56.804 [2024-11-20 05:33:28.467296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.804 [2024-11-20 05:33:28.467312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:56.804 
[2024-11-20 05:33:28.467321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.804 [2024-11-20 05:33:28.469078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.804 [2024-11-20 05:33:28.469111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:56.804 BaseBdev1 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.804 BaseBdev2_malloc 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.804 [2024-11-20 05:33:28.498867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:56.804 [2024-11-20 05:33:28.498916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.804 [2024-11-20 05:33:28.498931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:56.804 [2024-11-20 05:33:28.498941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.804 [2024-11-20 05:33:28.500675] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.804 [2024-11-20 05:33:28.500707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:56.804 BaseBdev2 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.804 BaseBdev3_malloc 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.804 [2024-11-20 05:33:28.551686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:56.804 [2024-11-20 05:33:28.551738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.804 [2024-11-20 05:33:28.551754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:56.804 [2024-11-20 05:33:28.551763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.804 [2024-11-20 05:33:28.553573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.804 [2024-11-20 05:33:28.553605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:22:56.804 BaseBdev3 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.804 spare_malloc 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.804 spare_delay 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.804 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.804 [2024-11-20 05:33:28.595604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:56.804 [2024-11-20 05:33:28.595649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.804 [2024-11-20 05:33:28.595664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:56.804 [2024-11-20 05:33:28.595673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.805 [2024-11-20 05:33:28.597471] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.805 [2024-11-20 05:33:28.597503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:56.805 spare 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.805 [2024-11-20 05:33:28.603660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:56.805 [2024-11-20 05:33:28.605155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:56.805 [2024-11-20 05:33:28.605213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:56.805 [2024-11-20 05:33:28.605347] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:56.805 [2024-11-20 05:33:28.605361] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:56.805 [2024-11-20 05:33:28.605578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:56.805 [2024-11-20 05:33:28.608556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:56.805 [2024-11-20 05:33:28.608576] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:56.805 [2024-11-20 05:33:28.608710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.805 05:33:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.805 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.063 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.063 "name": "raid_bdev1", 00:22:57.063 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:22:57.063 "strip_size_kb": 64, 00:22:57.063 "state": "online", 00:22:57.063 "raid_level": "raid5f", 00:22:57.063 "superblock": true, 
00:22:57.063 "num_base_bdevs": 3, 00:22:57.063 "num_base_bdevs_discovered": 3, 00:22:57.063 "num_base_bdevs_operational": 3, 00:22:57.063 "base_bdevs_list": [ 00:22:57.063 { 00:22:57.063 "name": "BaseBdev1", 00:22:57.063 "uuid": "925abcf6-1182-5fc7-9978-edc577e13451", 00:22:57.063 "is_configured": true, 00:22:57.063 "data_offset": 2048, 00:22:57.063 "data_size": 63488 00:22:57.063 }, 00:22:57.063 { 00:22:57.063 "name": "BaseBdev2", 00:22:57.063 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:22:57.063 "is_configured": true, 00:22:57.063 "data_offset": 2048, 00:22:57.063 "data_size": 63488 00:22:57.063 }, 00:22:57.063 { 00:22:57.063 "name": "BaseBdev3", 00:22:57.063 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:22:57.063 "is_configured": true, 00:22:57.063 "data_offset": 2048, 00:22:57.063 "data_size": 63488 00:22:57.063 } 00:22:57.063 ] 00:22:57.063 }' 00:22:57.063 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.063 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:57.321 [2024-11-20 05:33:28.912945] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.321 05:33:28 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:57.321 05:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:22:57.579 [2024-11-20 05:33:29.164885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:57.579 /dev/nbd0 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:57.579 1+0 records in 00:22:57.579 1+0 records out 00:22:57.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604244 s, 6.8 MB/s 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:22:57.579 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:22:57.837 496+0 records in 00:22:57.837 496+0 records out 00:22:57.837 65011712 bytes (65 MB, 62 MiB) copied, 0.266482 s, 244 MB/s 00:22:57.837 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:57.837 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:57.837 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:57.837 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:57.837 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:57.837 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:57.837 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:57.837 [2024-11-20 05:33:29.647935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:57.837 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.096 [2024-11-20 05:33:29.679527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:58.096 05:33:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:58.096 "name": "raid_bdev1", 00:22:58.096 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:22:58.096 "strip_size_kb": 64, 00:22:58.096 "state": "online", 00:22:58.096 "raid_level": "raid5f", 00:22:58.096 "superblock": true, 00:22:58.096 "num_base_bdevs": 3, 00:22:58.096 "num_base_bdevs_discovered": 2, 00:22:58.096 "num_base_bdevs_operational": 2, 00:22:58.096 "base_bdevs_list": [ 00:22:58.096 { 00:22:58.096 "name": null, 00:22:58.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.096 "is_configured": false, 00:22:58.096 "data_offset": 0, 00:22:58.096 "data_size": 63488 00:22:58.096 }, 00:22:58.096 { 00:22:58.096 "name": "BaseBdev2", 00:22:58.096 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:22:58.096 "is_configured": true, 00:22:58.096 "data_offset": 2048, 00:22:58.096 "data_size": 63488 00:22:58.096 }, 00:22:58.096 { 00:22:58.096 "name": "BaseBdev3", 00:22:58.096 "uuid": 
"7242c460-71ef-5664-b317-406a9f9ba115", 00:22:58.096 "is_configured": true, 00:22:58.096 "data_offset": 2048, 00:22:58.096 "data_size": 63488 00:22:58.096 } 00:22:58.096 ] 00:22:58.096 }' 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:58.096 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.364 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:58.364 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.364 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.364 [2024-11-20 05:33:29.991576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:58.364 [2024-11-20 05:33:30.000605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:22:58.364 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.364 05:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:58.364 [2024-11-20 05:33:30.005211] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:59.325 "name": "raid_bdev1", 00:22:59.325 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:22:59.325 "strip_size_kb": 64, 00:22:59.325 "state": "online", 00:22:59.325 "raid_level": "raid5f", 00:22:59.325 "superblock": true, 00:22:59.325 "num_base_bdevs": 3, 00:22:59.325 "num_base_bdevs_discovered": 3, 00:22:59.325 "num_base_bdevs_operational": 3, 00:22:59.325 "process": { 00:22:59.325 "type": "rebuild", 00:22:59.325 "target": "spare", 00:22:59.325 "progress": { 00:22:59.325 "blocks": 20480, 00:22:59.325 "percent": 16 00:22:59.325 } 00:22:59.325 }, 00:22:59.325 "base_bdevs_list": [ 00:22:59.325 { 00:22:59.325 "name": "spare", 00:22:59.325 "uuid": "009224b5-0f74-5f5e-b845-63e0dedd358d", 00:22:59.325 "is_configured": true, 00:22:59.325 "data_offset": 2048, 00:22:59.325 "data_size": 63488 00:22:59.325 }, 00:22:59.325 { 00:22:59.325 "name": "BaseBdev2", 00:22:59.325 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:22:59.325 "is_configured": true, 00:22:59.325 "data_offset": 2048, 00:22:59.325 "data_size": 63488 00:22:59.325 }, 00:22:59.325 { 00:22:59.325 "name": "BaseBdev3", 00:22:59.325 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:22:59.325 "is_configured": true, 00:22:59.325 "data_offset": 2048, 00:22:59.325 "data_size": 63488 00:22:59.325 } 00:22:59.325 ] 00:22:59.325 }' 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:59.325 05:33:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.325 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.325 [2024-11-20 05:33:31.118422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:59.584 [2024-11-20 05:33:31.215529] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:59.584 [2024-11-20 05:33:31.215703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.584 [2024-11-20 05:33:31.215766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:59.584 [2024-11-20 05:33:31.215787] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:59.584 05:33:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.584 "name": "raid_bdev1", 00:22:59.584 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:22:59.584 "strip_size_kb": 64, 00:22:59.584 "state": "online", 00:22:59.584 "raid_level": "raid5f", 00:22:59.584 "superblock": true, 00:22:59.584 "num_base_bdevs": 3, 00:22:59.584 "num_base_bdevs_discovered": 2, 00:22:59.584 "num_base_bdevs_operational": 2, 00:22:59.584 "base_bdevs_list": [ 00:22:59.584 { 00:22:59.584 "name": null, 00:22:59.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.584 "is_configured": false, 00:22:59.584 "data_offset": 0, 00:22:59.584 "data_size": 63488 00:22:59.584 }, 00:22:59.584 { 00:22:59.584 "name": "BaseBdev2", 00:22:59.584 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:22:59.584 "is_configured": true, 00:22:59.584 "data_offset": 2048, 00:22:59.584 "data_size": 
63488 00:22:59.584 }, 00:22:59.584 { 00:22:59.584 "name": "BaseBdev3", 00:22:59.584 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:22:59.584 "is_configured": true, 00:22:59.584 "data_offset": 2048, 00:22:59.584 "data_size": 63488 00:22:59.584 } 00:22:59.584 ] 00:22:59.584 }' 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.584 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:59.842 "name": "raid_bdev1", 00:22:59.842 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:22:59.842 "strip_size_kb": 64, 00:22:59.842 "state": "online", 00:22:59.842 "raid_level": "raid5f", 00:22:59.842 "superblock": true, 00:22:59.842 "num_base_bdevs": 3, 00:22:59.842 
"num_base_bdevs_discovered": 2, 00:22:59.842 "num_base_bdevs_operational": 2, 00:22:59.842 "base_bdevs_list": [ 00:22:59.842 { 00:22:59.842 "name": null, 00:22:59.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.842 "is_configured": false, 00:22:59.842 "data_offset": 0, 00:22:59.842 "data_size": 63488 00:22:59.842 }, 00:22:59.842 { 00:22:59.842 "name": "BaseBdev2", 00:22:59.842 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:22:59.842 "is_configured": true, 00:22:59.842 "data_offset": 2048, 00:22:59.842 "data_size": 63488 00:22:59.842 }, 00:22:59.842 { 00:22:59.842 "name": "BaseBdev3", 00:22:59.842 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:22:59.842 "is_configured": true, 00:22:59.842 "data_offset": 2048, 00:22:59.842 "data_size": 63488 00:22:59.842 } 00:22:59.842 ] 00:22:59.842 }' 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.842 [2024-11-20 05:33:31.654303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:59.842 [2024-11-20 05:33:31.662707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:22:59.842 05:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.842 05:33:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:59.842 [2024-11-20 05:33:31.667186] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:01.218 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:01.218 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:01.219 "name": "raid_bdev1", 00:23:01.219 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:01.219 "strip_size_kb": 64, 00:23:01.219 "state": "online", 00:23:01.219 "raid_level": "raid5f", 00:23:01.219 "superblock": true, 00:23:01.219 "num_base_bdevs": 3, 00:23:01.219 "num_base_bdevs_discovered": 3, 00:23:01.219 "num_base_bdevs_operational": 3, 00:23:01.219 "process": { 00:23:01.219 "type": "rebuild", 00:23:01.219 "target": "spare", 00:23:01.219 "progress": { 00:23:01.219 "blocks": 20480, 00:23:01.219 "percent": 16 00:23:01.219 } 
00:23:01.219 }, 00:23:01.219 "base_bdevs_list": [ 00:23:01.219 { 00:23:01.219 "name": "spare", 00:23:01.219 "uuid": "009224b5-0f74-5f5e-b845-63e0dedd358d", 00:23:01.219 "is_configured": true, 00:23:01.219 "data_offset": 2048, 00:23:01.219 "data_size": 63488 00:23:01.219 }, 00:23:01.219 { 00:23:01.219 "name": "BaseBdev2", 00:23:01.219 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:01.219 "is_configured": true, 00:23:01.219 "data_offset": 2048, 00:23:01.219 "data_size": 63488 00:23:01.219 }, 00:23:01.219 { 00:23:01.219 "name": "BaseBdev3", 00:23:01.219 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:01.219 "is_configured": true, 00:23:01.219 "data_offset": 2048, 00:23:01.219 "data_size": 63488 00:23:01.219 } 00:23:01.219 ] 00:23:01.219 }' 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:01.219 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=447 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:01.219 05:33:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:01.219 "name": "raid_bdev1", 00:23:01.219 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:01.219 "strip_size_kb": 64, 00:23:01.219 "state": "online", 00:23:01.219 "raid_level": "raid5f", 00:23:01.219 "superblock": true, 00:23:01.219 "num_base_bdevs": 3, 00:23:01.219 "num_base_bdevs_discovered": 3, 00:23:01.219 "num_base_bdevs_operational": 3, 00:23:01.219 "process": { 00:23:01.219 "type": "rebuild", 00:23:01.219 "target": "spare", 00:23:01.219 "progress": { 00:23:01.219 "blocks": 22528, 00:23:01.219 "percent": 17 00:23:01.219 } 00:23:01.219 }, 00:23:01.219 "base_bdevs_list": [ 00:23:01.219 { 00:23:01.219 "name": "spare", 00:23:01.219 "uuid": "009224b5-0f74-5f5e-b845-63e0dedd358d", 00:23:01.219 "is_configured": true, 00:23:01.219 "data_offset": 2048, 00:23:01.219 
"data_size": 63488 00:23:01.219 }, 00:23:01.219 { 00:23:01.219 "name": "BaseBdev2", 00:23:01.219 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:01.219 "is_configured": true, 00:23:01.219 "data_offset": 2048, 00:23:01.219 "data_size": 63488 00:23:01.219 }, 00:23:01.219 { 00:23:01.219 "name": "BaseBdev3", 00:23:01.219 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:01.219 "is_configured": true, 00:23:01.219 "data_offset": 2048, 00:23:01.219 "data_size": 63488 00:23:01.219 } 00:23:01.219 ] 00:23:01.219 }' 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:01.219 05:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:02.152 05:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:02.152 05:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:02.152 05:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:02.152 05:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:02.152 05:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:02.152 05:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:02.152 05:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.152 05:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.152 
05:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:02.152 05:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.152 05:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.152 05:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:02.152 "name": "raid_bdev1", 00:23:02.152 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:02.152 "strip_size_kb": 64, 00:23:02.152 "state": "online", 00:23:02.152 "raid_level": "raid5f", 00:23:02.152 "superblock": true, 00:23:02.152 "num_base_bdevs": 3, 00:23:02.152 "num_base_bdevs_discovered": 3, 00:23:02.152 "num_base_bdevs_operational": 3, 00:23:02.152 "process": { 00:23:02.152 "type": "rebuild", 00:23:02.152 "target": "spare", 00:23:02.152 "progress": { 00:23:02.152 "blocks": 45056, 00:23:02.152 "percent": 35 00:23:02.152 } 00:23:02.152 }, 00:23:02.152 "base_bdevs_list": [ 00:23:02.152 { 00:23:02.152 "name": "spare", 00:23:02.152 "uuid": "009224b5-0f74-5f5e-b845-63e0dedd358d", 00:23:02.152 "is_configured": true, 00:23:02.152 "data_offset": 2048, 00:23:02.152 "data_size": 63488 00:23:02.152 }, 00:23:02.152 { 00:23:02.152 "name": "BaseBdev2", 00:23:02.152 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:02.152 "is_configured": true, 00:23:02.152 "data_offset": 2048, 00:23:02.152 "data_size": 63488 00:23:02.152 }, 00:23:02.152 { 00:23:02.152 "name": "BaseBdev3", 00:23:02.152 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:02.152 "is_configured": true, 00:23:02.152 "data_offset": 2048, 00:23:02.152 "data_size": 63488 00:23:02.152 } 00:23:02.152 ] 00:23:02.152 }' 00:23:02.152 05:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:02.410 05:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:02.410 05:33:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:02.410 05:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:02.410 05:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:03.343 "name": "raid_bdev1", 00:23:03.343 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:03.343 "strip_size_kb": 64, 00:23:03.343 "state": "online", 00:23:03.343 "raid_level": "raid5f", 00:23:03.343 "superblock": true, 00:23:03.343 "num_base_bdevs": 3, 00:23:03.343 "num_base_bdevs_discovered": 3, 00:23:03.343 "num_base_bdevs_operational": 
3, 00:23:03.343 "process": { 00:23:03.343 "type": "rebuild", 00:23:03.343 "target": "spare", 00:23:03.343 "progress": { 00:23:03.343 "blocks": 67584, 00:23:03.343 "percent": 53 00:23:03.343 } 00:23:03.343 }, 00:23:03.343 "base_bdevs_list": [ 00:23:03.343 { 00:23:03.343 "name": "spare", 00:23:03.343 "uuid": "009224b5-0f74-5f5e-b845-63e0dedd358d", 00:23:03.343 "is_configured": true, 00:23:03.343 "data_offset": 2048, 00:23:03.343 "data_size": 63488 00:23:03.343 }, 00:23:03.343 { 00:23:03.343 "name": "BaseBdev2", 00:23:03.343 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:03.343 "is_configured": true, 00:23:03.343 "data_offset": 2048, 00:23:03.343 "data_size": 63488 00:23:03.343 }, 00:23:03.343 { 00:23:03.343 "name": "BaseBdev3", 00:23:03.343 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:03.343 "is_configured": true, 00:23:03.343 "data_offset": 2048, 00:23:03.343 "data_size": 63488 00:23:03.343 } 00:23:03.343 ] 00:23:03.343 }' 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:03.343 05:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:04.717 05:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:04.717 05:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:04.717 05:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:04.717 05:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:04.717 
05:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:04.717 05:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:04.717 05:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.717 05:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.718 05:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.718 05:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.718 05:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.718 05:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:04.718 "name": "raid_bdev1", 00:23:04.718 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:04.718 "strip_size_kb": 64, 00:23:04.718 "state": "online", 00:23:04.718 "raid_level": "raid5f", 00:23:04.718 "superblock": true, 00:23:04.718 "num_base_bdevs": 3, 00:23:04.718 "num_base_bdevs_discovered": 3, 00:23:04.718 "num_base_bdevs_operational": 3, 00:23:04.718 "process": { 00:23:04.718 "type": "rebuild", 00:23:04.718 "target": "spare", 00:23:04.718 "progress": { 00:23:04.718 "blocks": 90112, 00:23:04.718 "percent": 70 00:23:04.718 } 00:23:04.718 }, 00:23:04.718 "base_bdevs_list": [ 00:23:04.718 { 00:23:04.718 "name": "spare", 00:23:04.718 "uuid": "009224b5-0f74-5f5e-b845-63e0dedd358d", 00:23:04.718 "is_configured": true, 00:23:04.718 "data_offset": 2048, 00:23:04.718 "data_size": 63488 00:23:04.718 }, 00:23:04.718 { 00:23:04.718 "name": "BaseBdev2", 00:23:04.718 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:04.718 "is_configured": true, 00:23:04.718 "data_offset": 2048, 00:23:04.718 "data_size": 63488 00:23:04.718 }, 00:23:04.718 { 00:23:04.718 "name": "BaseBdev3", 00:23:04.718 "uuid": 
"7242c460-71ef-5664-b317-406a9f9ba115", 00:23:04.718 "is_configured": true, 00:23:04.718 "data_offset": 2048, 00:23:04.718 "data_size": 63488 00:23:04.718 } 00:23:04.718 ] 00:23:04.718 }' 00:23:04.718 05:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:04.718 05:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:04.718 05:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:04.718 05:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:04.718 05:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:05.651 05:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:05.651 05:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:05.651 05:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:05.651 05:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:05.651 05:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:05.651 05:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:05.651 05:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.651 05:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.651 05:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.651 05:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.651 05:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.651 
05:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:05.651 "name": "raid_bdev1", 00:23:05.651 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:05.651 "strip_size_kb": 64, 00:23:05.651 "state": "online", 00:23:05.651 "raid_level": "raid5f", 00:23:05.651 "superblock": true, 00:23:05.651 "num_base_bdevs": 3, 00:23:05.651 "num_base_bdevs_discovered": 3, 00:23:05.651 "num_base_bdevs_operational": 3, 00:23:05.651 "process": { 00:23:05.651 "type": "rebuild", 00:23:05.651 "target": "spare", 00:23:05.651 "progress": { 00:23:05.651 "blocks": 112640, 00:23:05.651 "percent": 88 00:23:05.651 } 00:23:05.651 }, 00:23:05.651 "base_bdevs_list": [ 00:23:05.651 { 00:23:05.651 "name": "spare", 00:23:05.651 "uuid": "009224b5-0f74-5f5e-b845-63e0dedd358d", 00:23:05.651 "is_configured": true, 00:23:05.651 "data_offset": 2048, 00:23:05.651 "data_size": 63488 00:23:05.651 }, 00:23:05.651 { 00:23:05.651 "name": "BaseBdev2", 00:23:05.651 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:05.651 "is_configured": true, 00:23:05.651 "data_offset": 2048, 00:23:05.651 "data_size": 63488 00:23:05.651 }, 00:23:05.651 { 00:23:05.651 "name": "BaseBdev3", 00:23:05.651 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:05.651 "is_configured": true, 00:23:05.651 "data_offset": 2048, 00:23:05.651 "data_size": 63488 00:23:05.651 } 00:23:05.651 ] 00:23:05.651 }' 00:23:05.651 05:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:05.651 05:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:05.651 05:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:05.651 05:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:05.651 05:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:06.217 [2024-11-20 05:33:37.917721] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:06.217 [2024-11-20 05:33:37.917953] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:06.217 [2024-11-20 05:33:37.918063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:06.784 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:06.784 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:06.784 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:06.785 "name": "raid_bdev1", 00:23:06.785 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:06.785 "strip_size_kb": 64, 00:23:06.785 "state": "online", 00:23:06.785 "raid_level": "raid5f", 00:23:06.785 "superblock": true, 00:23:06.785 "num_base_bdevs": 3, 00:23:06.785 "num_base_bdevs_discovered": 3, 
00:23:06.785 "num_base_bdevs_operational": 3, 00:23:06.785 "base_bdevs_list": [ 00:23:06.785 { 00:23:06.785 "name": "spare", 00:23:06.785 "uuid": "009224b5-0f74-5f5e-b845-63e0dedd358d", 00:23:06.785 "is_configured": true, 00:23:06.785 "data_offset": 2048, 00:23:06.785 "data_size": 63488 00:23:06.785 }, 00:23:06.785 { 00:23:06.785 "name": "BaseBdev2", 00:23:06.785 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:06.785 "is_configured": true, 00:23:06.785 "data_offset": 2048, 00:23:06.785 "data_size": 63488 00:23:06.785 }, 00:23:06.785 { 00:23:06.785 "name": "BaseBdev3", 00:23:06.785 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:06.785 "is_configured": true, 00:23:06.785 "data_offset": 2048, 00:23:06.785 "data_size": 63488 00:23:06.785 } 00:23:06.785 ] 00:23:06.785 }' 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:06.785 "name": "raid_bdev1", 00:23:06.785 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:06.785 "strip_size_kb": 64, 00:23:06.785 "state": "online", 00:23:06.785 "raid_level": "raid5f", 00:23:06.785 "superblock": true, 00:23:06.785 "num_base_bdevs": 3, 00:23:06.785 "num_base_bdevs_discovered": 3, 00:23:06.785 "num_base_bdevs_operational": 3, 00:23:06.785 "base_bdevs_list": [ 00:23:06.785 { 00:23:06.785 "name": "spare", 00:23:06.785 "uuid": "009224b5-0f74-5f5e-b845-63e0dedd358d", 00:23:06.785 "is_configured": true, 00:23:06.785 "data_offset": 2048, 00:23:06.785 "data_size": 63488 00:23:06.785 }, 00:23:06.785 { 00:23:06.785 "name": "BaseBdev2", 00:23:06.785 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:06.785 "is_configured": true, 00:23:06.785 "data_offset": 2048, 00:23:06.785 "data_size": 63488 00:23:06.785 }, 00:23:06.785 { 00:23:06.785 "name": "BaseBdev3", 00:23:06.785 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:06.785 "is_configured": true, 00:23:06.785 "data_offset": 2048, 00:23:06.785 "data_size": 63488 00:23:06.785 } 00:23:06.785 ] 00:23:06.785 }' 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:06.785 "name": "raid_bdev1", 00:23:06.785 "uuid": 
"99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:06.785 "strip_size_kb": 64, 00:23:06.785 "state": "online", 00:23:06.785 "raid_level": "raid5f", 00:23:06.785 "superblock": true, 00:23:06.785 "num_base_bdevs": 3, 00:23:06.785 "num_base_bdevs_discovered": 3, 00:23:06.785 "num_base_bdevs_operational": 3, 00:23:06.785 "base_bdevs_list": [ 00:23:06.785 { 00:23:06.785 "name": "spare", 00:23:06.785 "uuid": "009224b5-0f74-5f5e-b845-63e0dedd358d", 00:23:06.785 "is_configured": true, 00:23:06.785 "data_offset": 2048, 00:23:06.785 "data_size": 63488 00:23:06.785 }, 00:23:06.785 { 00:23:06.785 "name": "BaseBdev2", 00:23:06.785 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:06.785 "is_configured": true, 00:23:06.785 "data_offset": 2048, 00:23:06.785 "data_size": 63488 00:23:06.785 }, 00:23:06.785 { 00:23:06.785 "name": "BaseBdev3", 00:23:06.785 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:06.785 "is_configured": true, 00:23:06.785 "data_offset": 2048, 00:23:06.785 "data_size": 63488 00:23:06.785 } 00:23:06.785 ] 00:23:06.785 }' 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:06.785 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.385 [2024-11-20 05:33:38.904551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:07.385 [2024-11-20 05:33:38.904698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:07.385 [2024-11-20 05:33:38.904778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:07.385 [2024-11-20 05:33:38.904851] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:07.385 [2024-11-20 05:33:38.904864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:07.385 05:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:07.385 /dev/nbd0 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:07.385 1+0 records in 00:23:07.385 1+0 records out 00:23:07.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211492 s, 19.4 MB/s 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:07.385 05:33:39 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:07.385 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:07.647 /dev/nbd1 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:07.647 1+0 records in 00:23:07.647 1+0 records out 00:23:07.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341335 s, 12.0 MB/s 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:07.647 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:07.907 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:07.907 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:07.907 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:07.907 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:07.907 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:07.907 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.907 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.166 [2024-11-20 05:33:39.985337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:08.166 [2024-11-20 05:33:39.985405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.166 [2024-11-20 05:33:39.985424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:08.166 [2024-11-20 05:33:39.985434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.166 [2024-11-20 05:33:39.987358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.166 [2024-11-20 05:33:39.987401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:08.166 [2024-11-20 05:33:39.987478] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:08.166 [2024-11-20 05:33:39.987520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:08.166 [2024-11-20 05:33:39.987633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:08.166 [2024-11-20 05:33:39.987716] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:08.166 spare 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.166 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:08.167 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.167 05:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.424 [2024-11-20 05:33:40.087790] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:08.424 [2024-11-20 05:33:40.087839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:08.424 [2024-11-20 05:33:40.088116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:23:08.424 [2024-11-20 05:33:40.091004] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:08.424 [2024-11-20 05:33:40.091026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:08.424 [2024-11-20 05:33:40.091196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.424 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.425 "name": "raid_bdev1", 00:23:08.425 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:08.425 "strip_size_kb": 64, 00:23:08.425 "state": "online", 00:23:08.425 "raid_level": "raid5f", 00:23:08.425 "superblock": true, 00:23:08.425 "num_base_bdevs": 3, 00:23:08.425 "num_base_bdevs_discovered": 3, 00:23:08.425 "num_base_bdevs_operational": 3, 00:23:08.425 "base_bdevs_list": [ 00:23:08.425 { 00:23:08.425 "name": "spare", 00:23:08.425 "uuid": "009224b5-0f74-5f5e-b845-63e0dedd358d", 00:23:08.425 "is_configured": true, 00:23:08.425 "data_offset": 2048, 00:23:08.425 "data_size": 63488 00:23:08.425 }, 00:23:08.425 { 00:23:08.425 "name": "BaseBdev2", 00:23:08.425 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:08.425 "is_configured": true, 00:23:08.425 "data_offset": 
2048, 00:23:08.425 "data_size": 63488 00:23:08.425 }, 00:23:08.425 { 00:23:08.425 "name": "BaseBdev3", 00:23:08.425 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:08.425 "is_configured": true, 00:23:08.425 "data_offset": 2048, 00:23:08.425 "data_size": 63488 00:23:08.425 } 00:23:08.425 ] 00:23:08.425 }' 00:23:08.425 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.425 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:08.684 "name": "raid_bdev1", 00:23:08.684 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:08.684 "strip_size_kb": 64, 00:23:08.684 "state": "online", 00:23:08.684 "raid_level": "raid5f", 00:23:08.684 "superblock": true, 00:23:08.684 
"num_base_bdevs": 3, 00:23:08.684 "num_base_bdevs_discovered": 3, 00:23:08.684 "num_base_bdevs_operational": 3, 00:23:08.684 "base_bdevs_list": [ 00:23:08.684 { 00:23:08.684 "name": "spare", 00:23:08.684 "uuid": "009224b5-0f74-5f5e-b845-63e0dedd358d", 00:23:08.684 "is_configured": true, 00:23:08.684 "data_offset": 2048, 00:23:08.684 "data_size": 63488 00:23:08.684 }, 00:23:08.684 { 00:23:08.684 "name": "BaseBdev2", 00:23:08.684 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:08.684 "is_configured": true, 00:23:08.684 "data_offset": 2048, 00:23:08.684 "data_size": 63488 00:23:08.684 }, 00:23:08.684 { 00:23:08.684 "name": "BaseBdev3", 00:23:08.684 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:08.684 "is_configured": true, 00:23:08.684 "data_offset": 2048, 00:23:08.684 "data_size": 63488 00:23:08.684 } 00:23:08.684 ] 00:23:08.684 }' 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:08.684 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:08.942 05:33:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.942 [2024-11-20 05:33:40.531246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.942 "name": "raid_bdev1", 00:23:08.942 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:08.942 "strip_size_kb": 64, 00:23:08.942 "state": "online", 00:23:08.942 "raid_level": "raid5f", 00:23:08.942 "superblock": true, 00:23:08.942 "num_base_bdevs": 3, 00:23:08.942 "num_base_bdevs_discovered": 2, 00:23:08.942 "num_base_bdevs_operational": 2, 00:23:08.942 "base_bdevs_list": [ 00:23:08.942 { 00:23:08.942 "name": null, 00:23:08.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.942 "is_configured": false, 00:23:08.942 "data_offset": 0, 00:23:08.942 "data_size": 63488 00:23:08.942 }, 00:23:08.942 { 00:23:08.942 "name": "BaseBdev2", 00:23:08.942 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:08.942 "is_configured": true, 00:23:08.942 "data_offset": 2048, 00:23:08.942 "data_size": 63488 00:23:08.942 }, 00:23:08.942 { 00:23:08.942 "name": "BaseBdev3", 00:23:08.942 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:08.942 "is_configured": true, 00:23:08.942 "data_offset": 2048, 00:23:08.942 "data_size": 63488 00:23:08.942 } 00:23:08.942 ] 00:23:08.942 }' 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.942 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.200 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:09.200 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.200 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.200 [2024-11-20 05:33:40.867327] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:09.200 [2024-11-20 05:33:40.867483] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:09.201 [2024-11-20 05:33:40.867503] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:09.201 [2024-11-20 05:33:40.867529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:09.201 [2024-11-20 05:33:40.875872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:23:09.201 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.201 05:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:09.201 [2024-11-20 05:33:40.880145] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:10.136 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:10.136 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:10.136 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:10.136 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:10.136 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:10.136 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.136 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.136 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.136 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:23:10.136 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.136 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:10.136 "name": "raid_bdev1", 00:23:10.136 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:10.136 "strip_size_kb": 64, 00:23:10.136 "state": "online", 00:23:10.136 "raid_level": "raid5f", 00:23:10.136 "superblock": true, 00:23:10.136 "num_base_bdevs": 3, 00:23:10.136 "num_base_bdevs_discovered": 3, 00:23:10.136 "num_base_bdevs_operational": 3, 00:23:10.136 "process": { 00:23:10.136 "type": "rebuild", 00:23:10.136 "target": "spare", 00:23:10.136 "progress": { 00:23:10.136 "blocks": 20480, 00:23:10.136 "percent": 16 00:23:10.136 } 00:23:10.136 }, 00:23:10.136 "base_bdevs_list": [ 00:23:10.136 { 00:23:10.136 "name": "spare", 00:23:10.136 "uuid": "009224b5-0f74-5f5e-b845-63e0dedd358d", 00:23:10.136 "is_configured": true, 00:23:10.136 "data_offset": 2048, 00:23:10.136 "data_size": 63488 00:23:10.136 }, 00:23:10.136 { 00:23:10.136 "name": "BaseBdev2", 00:23:10.136 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:10.136 "is_configured": true, 00:23:10.136 "data_offset": 2048, 00:23:10.136 "data_size": 63488 00:23:10.136 }, 00:23:10.136 { 00:23:10.136 "name": "BaseBdev3", 00:23:10.136 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:10.136 "is_configured": true, 00:23:10.136 "data_offset": 2048, 00:23:10.136 "data_size": 63488 00:23:10.136 } 00:23:10.136 ] 00:23:10.136 }' 00:23:10.136 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:10.136 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:10.136 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:10.437 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:23:10.437 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:10.437 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.437 05:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.437 [2024-11-20 05:33:41.981160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:10.437 [2024-11-20 05:33:41.989059] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:10.437 [2024-11-20 05:33:41.989115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.437 [2024-11-20 05:33:41.989129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:10.437 [2024-11-20 05:33:41.989137] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:10.437 05:33:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:10.437 "name": "raid_bdev1", 00:23:10.437 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:10.437 "strip_size_kb": 64, 00:23:10.437 "state": "online", 00:23:10.437 "raid_level": "raid5f", 00:23:10.437 "superblock": true, 00:23:10.437 "num_base_bdevs": 3, 00:23:10.437 "num_base_bdevs_discovered": 2, 00:23:10.437 "num_base_bdevs_operational": 2, 00:23:10.437 "base_bdevs_list": [ 00:23:10.437 { 00:23:10.437 "name": null, 00:23:10.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.437 "is_configured": false, 00:23:10.437 "data_offset": 0, 00:23:10.437 "data_size": 63488 00:23:10.437 }, 00:23:10.437 { 00:23:10.437 "name": "BaseBdev2", 00:23:10.437 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:10.437 "is_configured": true, 00:23:10.437 "data_offset": 2048, 00:23:10.437 "data_size": 63488 00:23:10.437 }, 00:23:10.437 { 00:23:10.437 "name": "BaseBdev3", 00:23:10.437 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:10.437 "is_configured": true, 00:23:10.437 "data_offset": 2048, 00:23:10.437 "data_size": 63488 00:23:10.437 } 00:23:10.437 ] 00:23:10.437 }' 00:23:10.437 05:33:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:10.437 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.697 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:10.697 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.697 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.697 [2024-11-20 05:33:42.315334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:10.697 [2024-11-20 05:33:42.315402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.697 [2024-11-20 05:33:42.315419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:23:10.697 [2024-11-20 05:33:42.315431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.697 [2024-11-20 05:33:42.315801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.697 [2024-11-20 05:33:42.315822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:10.697 [2024-11-20 05:33:42.315894] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:10.697 [2024-11-20 05:33:42.315911] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:10.697 [2024-11-20 05:33:42.315919] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:10.697 [2024-11-20 05:33:42.315935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:10.697 [2024-11-20 05:33:42.324257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:23:10.697 spare 00:23:10.697 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.697 05:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:10.697 [2024-11-20 05:33:42.328676] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:11.631 "name": "raid_bdev1", 00:23:11.631 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:11.631 "strip_size_kb": 64, 00:23:11.631 "state": 
"online", 00:23:11.631 "raid_level": "raid5f", 00:23:11.631 "superblock": true, 00:23:11.631 "num_base_bdevs": 3, 00:23:11.631 "num_base_bdevs_discovered": 3, 00:23:11.631 "num_base_bdevs_operational": 3, 00:23:11.631 "process": { 00:23:11.631 "type": "rebuild", 00:23:11.631 "target": "spare", 00:23:11.631 "progress": { 00:23:11.631 "blocks": 20480, 00:23:11.631 "percent": 16 00:23:11.631 } 00:23:11.631 }, 00:23:11.631 "base_bdevs_list": [ 00:23:11.631 { 00:23:11.631 "name": "spare", 00:23:11.631 "uuid": "009224b5-0f74-5f5e-b845-63e0dedd358d", 00:23:11.631 "is_configured": true, 00:23:11.631 "data_offset": 2048, 00:23:11.631 "data_size": 63488 00:23:11.631 }, 00:23:11.631 { 00:23:11.631 "name": "BaseBdev2", 00:23:11.631 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:11.631 "is_configured": true, 00:23:11.631 "data_offset": 2048, 00:23:11.631 "data_size": 63488 00:23:11.631 }, 00:23:11.631 { 00:23:11.631 "name": "BaseBdev3", 00:23:11.631 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:11.631 "is_configured": true, 00:23:11.631 "data_offset": 2048, 00:23:11.631 "data_size": 63488 00:23:11.631 } 00:23:11.631 ] 00:23:11.631 }' 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.631 [2024-11-20 05:33:43.429613] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:11.631 [2024-11-20 05:33:43.437529] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:11.631 [2024-11-20 05:33:43.437577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:11.631 [2024-11-20 05:33:43.437591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:11.631 [2024-11-20 05:33:43.437597] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.631 05:33:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.631 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.890 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.890 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:11.890 "name": "raid_bdev1", 00:23:11.890 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:11.890 "strip_size_kb": 64, 00:23:11.890 "state": "online", 00:23:11.890 "raid_level": "raid5f", 00:23:11.890 "superblock": true, 00:23:11.890 "num_base_bdevs": 3, 00:23:11.890 "num_base_bdevs_discovered": 2, 00:23:11.890 "num_base_bdevs_operational": 2, 00:23:11.890 "base_bdevs_list": [ 00:23:11.890 { 00:23:11.890 "name": null, 00:23:11.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.890 "is_configured": false, 00:23:11.890 "data_offset": 0, 00:23:11.890 "data_size": 63488 00:23:11.890 }, 00:23:11.890 { 00:23:11.890 "name": "BaseBdev2", 00:23:11.890 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:11.890 "is_configured": true, 00:23:11.890 "data_offset": 2048, 00:23:11.890 "data_size": 63488 00:23:11.890 }, 00:23:11.890 { 00:23:11.890 "name": "BaseBdev3", 00:23:11.890 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:11.890 "is_configured": true, 00:23:11.890 "data_offset": 2048, 00:23:11.890 "data_size": 63488 00:23:11.890 } 00:23:11.890 ] 00:23:11.890 }' 00:23:11.890 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:11.890 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:12.149 "name": "raid_bdev1", 00:23:12.149 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:12.149 "strip_size_kb": 64, 00:23:12.149 "state": "online", 00:23:12.149 "raid_level": "raid5f", 00:23:12.149 "superblock": true, 00:23:12.149 "num_base_bdevs": 3, 00:23:12.149 "num_base_bdevs_discovered": 2, 00:23:12.149 "num_base_bdevs_operational": 2, 00:23:12.149 "base_bdevs_list": [ 00:23:12.149 { 00:23:12.149 "name": null, 00:23:12.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.149 "is_configured": false, 00:23:12.149 "data_offset": 0, 00:23:12.149 "data_size": 63488 00:23:12.149 }, 00:23:12.149 { 00:23:12.149 "name": "BaseBdev2", 00:23:12.149 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:12.149 "is_configured": true, 00:23:12.149 "data_offset": 2048, 00:23:12.149 "data_size": 63488 00:23:12.149 }, 00:23:12.149 { 00:23:12.149 "name": "BaseBdev3", 00:23:12.149 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:12.149 
"is_configured": true, 00:23:12.149 "data_offset": 2048, 00:23:12.149 "data_size": 63488 00:23:12.149 } 00:23:12.149 ] 00:23:12.149 }' 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.149 [2024-11-20 05:33:43.839515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:12.149 [2024-11-20 05:33:43.839565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:12.149 [2024-11-20 05:33:43.839585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:23:12.149 [2024-11-20 05:33:43.839592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:12.149 [2024-11-20 05:33:43.839967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:12.149 
[2024-11-20 05:33:43.839986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:12.149 [2024-11-20 05:33:43.840050] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:12.149 [2024-11-20 05:33:43.840066] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:12.149 [2024-11-20 05:33:43.840076] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:12.149 [2024-11-20 05:33:43.840084] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:12.149 BaseBdev1 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.149 05:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:13.083 05:33:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:13.083 "name": "raid_bdev1", 00:23:13.083 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:13.083 "strip_size_kb": 64, 00:23:13.083 "state": "online", 00:23:13.083 "raid_level": "raid5f", 00:23:13.083 "superblock": true, 00:23:13.083 "num_base_bdevs": 3, 00:23:13.083 "num_base_bdevs_discovered": 2, 00:23:13.083 "num_base_bdevs_operational": 2, 00:23:13.083 "base_bdevs_list": [ 00:23:13.083 { 00:23:13.083 "name": null, 00:23:13.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.083 "is_configured": false, 00:23:13.083 "data_offset": 0, 00:23:13.083 "data_size": 63488 00:23:13.083 }, 00:23:13.083 { 00:23:13.083 "name": "BaseBdev2", 00:23:13.083 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:13.083 "is_configured": true, 00:23:13.083 "data_offset": 2048, 00:23:13.083 "data_size": 63488 00:23:13.083 }, 00:23:13.083 { 00:23:13.083 "name": "BaseBdev3", 00:23:13.083 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:13.083 "is_configured": true, 00:23:13.083 "data_offset": 2048, 00:23:13.083 "data_size": 63488 00:23:13.083 } 00:23:13.083 ] 00:23:13.083 }' 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:13.083 05:33:44 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:13.649 "name": "raid_bdev1", 00:23:13.649 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:13.649 "strip_size_kb": 64, 00:23:13.649 "state": "online", 00:23:13.649 "raid_level": "raid5f", 00:23:13.649 "superblock": true, 00:23:13.649 "num_base_bdevs": 3, 00:23:13.649 "num_base_bdevs_discovered": 2, 00:23:13.649 "num_base_bdevs_operational": 2, 00:23:13.649 "base_bdevs_list": [ 00:23:13.649 { 00:23:13.649 "name": null, 00:23:13.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.649 "is_configured": false, 00:23:13.649 "data_offset": 0, 00:23:13.649 "data_size": 63488 00:23:13.649 }, 00:23:13.649 { 00:23:13.649 "name": "BaseBdev2", 00:23:13.649 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 
00:23:13.649 "is_configured": true, 00:23:13.649 "data_offset": 2048, 00:23:13.649 "data_size": 63488 00:23:13.649 }, 00:23:13.649 { 00:23:13.649 "name": "BaseBdev3", 00:23:13.649 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:13.649 "is_configured": true, 00:23:13.649 "data_offset": 2048, 00:23:13.649 "data_size": 63488 00:23:13.649 } 00:23:13.649 ] 00:23:13.649 }' 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.649 05:33:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:13.649 [2024-11-20 05:33:45.291840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:13.649 [2024-11-20 05:33:45.291969] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:13.649 [2024-11-20 05:33:45.291982] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:13.649 request: 00:23:13.649 { 00:23:13.649 "base_bdev": "BaseBdev1", 00:23:13.649 "raid_bdev": "raid_bdev1", 00:23:13.649 "method": "bdev_raid_add_base_bdev", 00:23:13.649 "req_id": 1 00:23:13.649 } 00:23:13.649 Got JSON-RPC error response 00:23:13.649 response: 00:23:13.649 { 00:23:13.649 "code": -22, 00:23:13.649 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:13.649 } 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:13.649 05:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:14.583 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:14.583 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:14.583 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:14.583 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:14.583 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:14.583 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:14.583 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:14.583 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:14.583 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:14.583 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:14.583 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.583 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.583 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.583 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.583 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.583 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:14.583 "name": "raid_bdev1", 00:23:14.583 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:14.583 "strip_size_kb": 64, 00:23:14.583 "state": "online", 00:23:14.583 "raid_level": "raid5f", 00:23:14.583 "superblock": true, 00:23:14.583 "num_base_bdevs": 3, 00:23:14.583 "num_base_bdevs_discovered": 2, 00:23:14.583 "num_base_bdevs_operational": 2, 00:23:14.583 "base_bdevs_list": [ 00:23:14.583 { 00:23:14.583 "name": null, 00:23:14.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.583 "is_configured": false, 00:23:14.583 "data_offset": 0, 00:23:14.583 "data_size": 63488 00:23:14.583 }, 00:23:14.583 { 00:23:14.583 
"name": "BaseBdev2", 00:23:14.584 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:14.584 "is_configured": true, 00:23:14.584 "data_offset": 2048, 00:23:14.584 "data_size": 63488 00:23:14.584 }, 00:23:14.584 { 00:23:14.584 "name": "BaseBdev3", 00:23:14.584 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:14.584 "is_configured": true, 00:23:14.584 "data_offset": 2048, 00:23:14.584 "data_size": 63488 00:23:14.584 } 00:23:14.584 ] 00:23:14.584 }' 00:23:14.584 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:14.584 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.842 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:14.842 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:14.842 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:14.842 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:14.842 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:14.842 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.842 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.842 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.842 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.842 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.842 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:14.842 "name": "raid_bdev1", 00:23:14.842 "uuid": "99e9e38e-30fc-437b-88ec-f19b8f70a3c1", 00:23:14.842 
"strip_size_kb": 64, 00:23:14.842 "state": "online", 00:23:14.842 "raid_level": "raid5f", 00:23:14.842 "superblock": true, 00:23:14.842 "num_base_bdevs": 3, 00:23:14.842 "num_base_bdevs_discovered": 2, 00:23:14.842 "num_base_bdevs_operational": 2, 00:23:14.842 "base_bdevs_list": [ 00:23:14.842 { 00:23:14.842 "name": null, 00:23:14.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.842 "is_configured": false, 00:23:14.842 "data_offset": 0, 00:23:14.842 "data_size": 63488 00:23:14.842 }, 00:23:14.842 { 00:23:14.842 "name": "BaseBdev2", 00:23:14.842 "uuid": "d208a23c-80bf-532d-9952-67de76f71167", 00:23:14.842 "is_configured": true, 00:23:14.842 "data_offset": 2048, 00:23:14.842 "data_size": 63488 00:23:14.842 }, 00:23:14.842 { 00:23:14.842 "name": "BaseBdev3", 00:23:14.842 "uuid": "7242c460-71ef-5664-b317-406a9f9ba115", 00:23:14.842 "is_configured": true, 00:23:14.842 "data_offset": 2048, 00:23:14.842 "data_size": 63488 00:23:14.842 } 00:23:14.842 ] 00:23:14.842 }' 00:23:14.842 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:15.100 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:15.100 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:15.100 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:15.100 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 79786 00:23:15.100 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 79786 ']' 00:23:15.100 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 79786 00:23:15.100 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:23:15.100 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:15.100 05:33:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79786 00:23:15.100 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:15.100 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:15.100 killing process with pid 79786 00:23:15.100 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79786' 00:23:15.100 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 79786 00:23:15.100 Received shutdown signal, test time was about 60.000000 seconds 00:23:15.100 00:23:15.100 Latency(us) 00:23:15.100 [2024-11-20T05:33:46.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.100 [2024-11-20T05:33:46.935Z] =================================================================================================================== 00:23:15.100 [2024-11-20T05:33:46.935Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:15.100 [2024-11-20 05:33:46.734296] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:15.100 05:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 79786 00:23:15.100 [2024-11-20 05:33:46.734412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:15.100 [2024-11-20 05:33:46.734465] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:15.100 [2024-11-20 05:33:46.734476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:15.100 [2024-11-20 05:33:46.929477] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:15.665 05:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:23:15.665 00:23:15.665 real 0m19.891s 00:23:15.665 user 0m24.858s 
00:23:15.665 sys 0m1.877s 00:23:15.665 05:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:15.665 05:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.665 ************************************ 00:23:15.665 END TEST raid5f_rebuild_test_sb 00:23:15.665 ************************************ 00:23:15.923 05:33:47 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:23:15.923 05:33:47 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:23:15.923 05:33:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:23:15.923 05:33:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:15.923 05:33:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:15.923 ************************************ 00:23:15.923 START TEST raid5f_state_function_test 00:23:15.923 ************************************ 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80499 00:23:15.923 Process raid pid: 80499 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80499' 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80499 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 80499 ']' 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:15.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:15.923 05:33:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.923 [2024-11-20 05:33:47.606670] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:23:15.923 [2024-11-20 05:33:47.606791] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.181 [2024-11-20 05:33:47.774942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.181 [2024-11-20 05:33:47.860373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.181 [2024-11-20 05:33:47.973757] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:16.181 [2024-11-20 05:33:47.973796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.746 [2024-11-20 05:33:48.426164] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:16.746 [2024-11-20 05:33:48.426213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:16.746 [2024-11-20 05:33:48.426221] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:16.746 [2024-11-20 05:33:48.426229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:16.746 [2024-11-20 05:33:48.426234] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:23:16.746 [2024-11-20 05:33:48.426240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:16.746 [2024-11-20 05:33:48.426245] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:16.746 [2024-11-20 05:33:48.426252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:16.746 05:33:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:16.746 "name": "Existed_Raid", 00:23:16.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.746 "strip_size_kb": 64, 00:23:16.746 "state": "configuring", 00:23:16.746 "raid_level": "raid5f", 00:23:16.746 "superblock": false, 00:23:16.746 "num_base_bdevs": 4, 00:23:16.746 "num_base_bdevs_discovered": 0, 00:23:16.746 "num_base_bdevs_operational": 4, 00:23:16.746 "base_bdevs_list": [ 00:23:16.746 { 00:23:16.746 "name": "BaseBdev1", 00:23:16.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.746 "is_configured": false, 00:23:16.746 "data_offset": 0, 00:23:16.746 "data_size": 0 00:23:16.746 }, 00:23:16.746 { 00:23:16.746 "name": "BaseBdev2", 00:23:16.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.746 "is_configured": false, 00:23:16.746 "data_offset": 0, 00:23:16.746 "data_size": 0 00:23:16.746 }, 00:23:16.746 { 00:23:16.746 "name": "BaseBdev3", 00:23:16.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.746 "is_configured": false, 00:23:16.746 "data_offset": 0, 00:23:16.746 "data_size": 0 00:23:16.746 }, 00:23:16.746 { 00:23:16.746 "name": "BaseBdev4", 00:23:16.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.746 "is_configured": false, 00:23:16.746 "data_offset": 0, 00:23:16.746 "data_size": 0 00:23:16.746 } 00:23:16.746 ] 00:23:16.746 }' 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:16.746 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.005 [2024-11-20 05:33:48.782187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:17.005 [2024-11-20 05:33:48.782226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.005 [2024-11-20 05:33:48.790192] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:17.005 [2024-11-20 05:33:48.790231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:17.005 [2024-11-20 05:33:48.790238] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:17.005 [2024-11-20 05:33:48.790245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:17.005 [2024-11-20 05:33:48.790250] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:17.005 [2024-11-20 05:33:48.790257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:17.005 [2024-11-20 05:33:48.790262] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:23:17.005 [2024-11-20 05:33:48.790269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.005 [2024-11-20 05:33:48.818761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:17.005 BaseBdev1 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.005 
05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.005 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.264 [ 00:23:17.264 { 00:23:17.264 "name": "BaseBdev1", 00:23:17.264 "aliases": [ 00:23:17.264 "ac447400-c360-42ff-a7f6-277be19bdeae" 00:23:17.264 ], 00:23:17.264 "product_name": "Malloc disk", 00:23:17.264 "block_size": 512, 00:23:17.264 "num_blocks": 65536, 00:23:17.264 "uuid": "ac447400-c360-42ff-a7f6-277be19bdeae", 00:23:17.264 "assigned_rate_limits": { 00:23:17.264 "rw_ios_per_sec": 0, 00:23:17.264 "rw_mbytes_per_sec": 0, 00:23:17.264 "r_mbytes_per_sec": 0, 00:23:17.264 "w_mbytes_per_sec": 0 00:23:17.264 }, 00:23:17.264 "claimed": true, 00:23:17.264 "claim_type": "exclusive_write", 00:23:17.264 "zoned": false, 00:23:17.264 "supported_io_types": { 00:23:17.264 "read": true, 00:23:17.264 "write": true, 00:23:17.264 "unmap": true, 00:23:17.264 "flush": true, 00:23:17.264 "reset": true, 00:23:17.264 "nvme_admin": false, 00:23:17.264 "nvme_io": false, 00:23:17.264 "nvme_io_md": false, 00:23:17.264 "write_zeroes": true, 00:23:17.264 "zcopy": true, 00:23:17.264 "get_zone_info": false, 00:23:17.264 "zone_management": false, 00:23:17.264 "zone_append": false, 00:23:17.264 "compare": false, 00:23:17.264 "compare_and_write": false, 00:23:17.264 "abort": true, 00:23:17.264 "seek_hole": false, 00:23:17.264 "seek_data": false, 00:23:17.264 "copy": true, 00:23:17.264 "nvme_iov_md": false 00:23:17.264 }, 00:23:17.264 "memory_domains": [ 00:23:17.264 { 00:23:17.264 "dma_device_id": "system", 00:23:17.264 "dma_device_type": 1 00:23:17.264 }, 00:23:17.264 { 00:23:17.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.264 "dma_device_type": 2 00:23:17.264 } 00:23:17.264 ], 00:23:17.264 "driver_specific": {} 00:23:17.264 } 
00:23:17.264 ] 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:17.264 "name": "Existed_Raid", 00:23:17.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.264 "strip_size_kb": 64, 00:23:17.264 "state": "configuring", 00:23:17.264 "raid_level": "raid5f", 00:23:17.264 "superblock": false, 00:23:17.264 "num_base_bdevs": 4, 00:23:17.264 "num_base_bdevs_discovered": 1, 00:23:17.264 "num_base_bdevs_operational": 4, 00:23:17.264 "base_bdevs_list": [ 00:23:17.264 { 00:23:17.264 "name": "BaseBdev1", 00:23:17.264 "uuid": "ac447400-c360-42ff-a7f6-277be19bdeae", 00:23:17.264 "is_configured": true, 00:23:17.264 "data_offset": 0, 00:23:17.264 "data_size": 65536 00:23:17.264 }, 00:23:17.264 { 00:23:17.264 "name": "BaseBdev2", 00:23:17.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.264 "is_configured": false, 00:23:17.264 "data_offset": 0, 00:23:17.264 "data_size": 0 00:23:17.264 }, 00:23:17.264 { 00:23:17.264 "name": "BaseBdev3", 00:23:17.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.264 "is_configured": false, 00:23:17.264 "data_offset": 0, 00:23:17.264 "data_size": 0 00:23:17.264 }, 00:23:17.264 { 00:23:17.264 "name": "BaseBdev4", 00:23:17.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.264 "is_configured": false, 00:23:17.264 "data_offset": 0, 00:23:17.264 "data_size": 0 00:23:17.264 } 00:23:17.264 ] 00:23:17.264 }' 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:17.264 05:33:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.523 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:17.523 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.523 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.523 
[2024-11-20 05:33:49.186864] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:17.523 [2024-11-20 05:33:49.186912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:17.523 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.524 [2024-11-20 05:33:49.194921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:17.524 [2024-11-20 05:33:49.196505] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:17.524 [2024-11-20 05:33:49.196543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:17.524 [2024-11-20 05:33:49.196551] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:17.524 [2024-11-20 05:33:49.196560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:17.524 [2024-11-20 05:33:49.196565] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:17.524 [2024-11-20 05:33:49.196572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.524 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:17.524 "name": "Existed_Raid", 00:23:17.525 "uuid": "00000000-0000-0000-0000-000000000000", 
00:23:17.525 "strip_size_kb": 64, 00:23:17.525 "state": "configuring", 00:23:17.525 "raid_level": "raid5f", 00:23:17.525 "superblock": false, 00:23:17.525 "num_base_bdevs": 4, 00:23:17.525 "num_base_bdevs_discovered": 1, 00:23:17.525 "num_base_bdevs_operational": 4, 00:23:17.525 "base_bdevs_list": [ 00:23:17.525 { 00:23:17.525 "name": "BaseBdev1", 00:23:17.525 "uuid": "ac447400-c360-42ff-a7f6-277be19bdeae", 00:23:17.525 "is_configured": true, 00:23:17.525 "data_offset": 0, 00:23:17.525 "data_size": 65536 00:23:17.525 }, 00:23:17.525 { 00:23:17.525 "name": "BaseBdev2", 00:23:17.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.525 "is_configured": false, 00:23:17.525 "data_offset": 0, 00:23:17.525 "data_size": 0 00:23:17.525 }, 00:23:17.525 { 00:23:17.525 "name": "BaseBdev3", 00:23:17.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.525 "is_configured": false, 00:23:17.525 "data_offset": 0, 00:23:17.525 "data_size": 0 00:23:17.525 }, 00:23:17.525 { 00:23:17.525 "name": "BaseBdev4", 00:23:17.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.525 "is_configured": false, 00:23:17.525 "data_offset": 0, 00:23:17.525 "data_size": 0 00:23:17.525 } 00:23:17.525 ] 00:23:17.525 }' 00:23:17.525 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:17.525 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.785 [2024-11-20 05:33:49.601535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:17.785 BaseBdev2 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.785 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.043 [ 00:23:18.043 { 00:23:18.043 "name": "BaseBdev2", 00:23:18.043 "aliases": [ 00:23:18.043 "6183f6cc-da41-40b7-acd6-960cc162874a" 00:23:18.043 ], 00:23:18.043 "product_name": "Malloc disk", 00:23:18.043 "block_size": 512, 00:23:18.043 "num_blocks": 65536, 00:23:18.043 "uuid": "6183f6cc-da41-40b7-acd6-960cc162874a", 00:23:18.043 "assigned_rate_limits": { 00:23:18.043 "rw_ios_per_sec": 0, 00:23:18.043 "rw_mbytes_per_sec": 0, 00:23:18.043 
"r_mbytes_per_sec": 0, 00:23:18.043 "w_mbytes_per_sec": 0 00:23:18.043 }, 00:23:18.043 "claimed": true, 00:23:18.043 "claim_type": "exclusive_write", 00:23:18.043 "zoned": false, 00:23:18.043 "supported_io_types": { 00:23:18.043 "read": true, 00:23:18.043 "write": true, 00:23:18.043 "unmap": true, 00:23:18.043 "flush": true, 00:23:18.043 "reset": true, 00:23:18.043 "nvme_admin": false, 00:23:18.043 "nvme_io": false, 00:23:18.043 "nvme_io_md": false, 00:23:18.043 "write_zeroes": true, 00:23:18.043 "zcopy": true, 00:23:18.043 "get_zone_info": false, 00:23:18.043 "zone_management": false, 00:23:18.043 "zone_append": false, 00:23:18.043 "compare": false, 00:23:18.043 "compare_and_write": false, 00:23:18.043 "abort": true, 00:23:18.043 "seek_hole": false, 00:23:18.043 "seek_data": false, 00:23:18.043 "copy": true, 00:23:18.043 "nvme_iov_md": false 00:23:18.043 }, 00:23:18.043 "memory_domains": [ 00:23:18.043 { 00:23:18.043 "dma_device_id": "system", 00:23:18.043 "dma_device_type": 1 00:23:18.043 }, 00:23:18.043 { 00:23:18.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.043 "dma_device_type": 2 00:23:18.043 } 00:23:18.043 ], 00:23:18.043 "driver_specific": {} 00:23:18.043 } 00:23:18.043 ] 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:18.043 "name": "Existed_Raid", 00:23:18.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.043 "strip_size_kb": 64, 00:23:18.043 "state": "configuring", 00:23:18.043 "raid_level": "raid5f", 00:23:18.043 "superblock": false, 00:23:18.043 "num_base_bdevs": 4, 00:23:18.043 "num_base_bdevs_discovered": 2, 00:23:18.043 "num_base_bdevs_operational": 4, 00:23:18.043 "base_bdevs_list": [ 00:23:18.043 { 00:23:18.043 "name": "BaseBdev1", 00:23:18.043 "uuid": 
"ac447400-c360-42ff-a7f6-277be19bdeae", 00:23:18.043 "is_configured": true, 00:23:18.043 "data_offset": 0, 00:23:18.043 "data_size": 65536 00:23:18.043 }, 00:23:18.043 { 00:23:18.043 "name": "BaseBdev2", 00:23:18.043 "uuid": "6183f6cc-da41-40b7-acd6-960cc162874a", 00:23:18.043 "is_configured": true, 00:23:18.043 "data_offset": 0, 00:23:18.043 "data_size": 65536 00:23:18.043 }, 00:23:18.043 { 00:23:18.043 "name": "BaseBdev3", 00:23:18.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.043 "is_configured": false, 00:23:18.043 "data_offset": 0, 00:23:18.043 "data_size": 0 00:23:18.043 }, 00:23:18.043 { 00:23:18.043 "name": "BaseBdev4", 00:23:18.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.043 "is_configured": false, 00:23:18.043 "data_offset": 0, 00:23:18.043 "data_size": 0 00:23:18.043 } 00:23:18.043 ] 00:23:18.043 }' 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:18.043 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.301 05:33:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:18.301 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.301 05:33:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.301 [2024-11-20 05:33:50.023835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:18.301 BaseBdev3 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.301 [ 00:23:18.301 { 00:23:18.301 "name": "BaseBdev3", 00:23:18.301 "aliases": [ 00:23:18.301 "5787acfb-0cac-4a00-b827-cd09f612f971" 00:23:18.301 ], 00:23:18.301 "product_name": "Malloc disk", 00:23:18.301 "block_size": 512, 00:23:18.301 "num_blocks": 65536, 00:23:18.301 "uuid": "5787acfb-0cac-4a00-b827-cd09f612f971", 00:23:18.301 "assigned_rate_limits": { 00:23:18.301 "rw_ios_per_sec": 0, 00:23:18.301 "rw_mbytes_per_sec": 0, 00:23:18.301 "r_mbytes_per_sec": 0, 00:23:18.301 "w_mbytes_per_sec": 0 00:23:18.301 }, 00:23:18.301 "claimed": true, 00:23:18.301 "claim_type": "exclusive_write", 00:23:18.301 "zoned": false, 00:23:18.301 "supported_io_types": { 00:23:18.301 "read": true, 00:23:18.301 "write": true, 00:23:18.301 "unmap": true, 00:23:18.301 "flush": true, 00:23:18.301 "reset": true, 00:23:18.301 "nvme_admin": false, 
00:23:18.301 "nvme_io": false, 00:23:18.301 "nvme_io_md": false, 00:23:18.301 "write_zeroes": true, 00:23:18.301 "zcopy": true, 00:23:18.301 "get_zone_info": false, 00:23:18.301 "zone_management": false, 00:23:18.301 "zone_append": false, 00:23:18.301 "compare": false, 00:23:18.301 "compare_and_write": false, 00:23:18.301 "abort": true, 00:23:18.301 "seek_hole": false, 00:23:18.301 "seek_data": false, 00:23:18.301 "copy": true, 00:23:18.301 "nvme_iov_md": false 00:23:18.301 }, 00:23:18.301 "memory_domains": [ 00:23:18.301 { 00:23:18.301 "dma_device_id": "system", 00:23:18.301 "dma_device_type": 1 00:23:18.301 }, 00:23:18.301 { 00:23:18.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.301 "dma_device_type": 2 00:23:18.301 } 00:23:18.301 ], 00:23:18.301 "driver_specific": {} 00:23:18.301 } 00:23:18.301 ] 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.301 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:18.301 "name": "Existed_Raid", 00:23:18.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.301 "strip_size_kb": 64, 00:23:18.301 "state": "configuring", 00:23:18.301 "raid_level": "raid5f", 00:23:18.301 "superblock": false, 00:23:18.301 "num_base_bdevs": 4, 00:23:18.301 "num_base_bdevs_discovered": 3, 00:23:18.301 "num_base_bdevs_operational": 4, 00:23:18.301 "base_bdevs_list": [ 00:23:18.301 { 00:23:18.301 "name": "BaseBdev1", 00:23:18.301 "uuid": "ac447400-c360-42ff-a7f6-277be19bdeae", 00:23:18.301 "is_configured": true, 00:23:18.301 "data_offset": 0, 00:23:18.301 "data_size": 65536 00:23:18.301 }, 00:23:18.301 { 00:23:18.301 "name": "BaseBdev2", 00:23:18.302 "uuid": "6183f6cc-da41-40b7-acd6-960cc162874a", 00:23:18.302 "is_configured": true, 00:23:18.302 "data_offset": 0, 00:23:18.302 "data_size": 65536 00:23:18.302 }, 00:23:18.302 { 
00:23:18.302 "name": "BaseBdev3", 00:23:18.302 "uuid": "5787acfb-0cac-4a00-b827-cd09f612f971", 00:23:18.302 "is_configured": true, 00:23:18.302 "data_offset": 0, 00:23:18.302 "data_size": 65536 00:23:18.302 }, 00:23:18.302 { 00:23:18.302 "name": "BaseBdev4", 00:23:18.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.302 "is_configured": false, 00:23:18.302 "data_offset": 0, 00:23:18.302 "data_size": 0 00:23:18.302 } 00:23:18.302 ] 00:23:18.302 }' 00:23:18.302 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:18.302 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.560 [2024-11-20 05:33:50.361947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:18.560 [2024-11-20 05:33:50.361998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:18.560 [2024-11-20 05:33:50.362005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:18.560 [2024-11-20 05:33:50.362205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:18.560 [2024-11-20 05:33:50.366152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:18.560 [2024-11-20 05:33:50.366176] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:18.560 [2024-11-20 05:33:50.366377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:18.560 BaseBdev4 00:23:18.560 05:33:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.560 [ 00:23:18.560 { 00:23:18.560 "name": "BaseBdev4", 00:23:18.560 "aliases": [ 00:23:18.560 "fcf9776e-18f9-481f-a568-5b423acfff21" 00:23:18.560 ], 00:23:18.560 "product_name": "Malloc disk", 00:23:18.560 "block_size": 512, 00:23:18.560 "num_blocks": 65536, 00:23:18.560 "uuid": "fcf9776e-18f9-481f-a568-5b423acfff21", 00:23:18.560 "assigned_rate_limits": { 00:23:18.560 "rw_ios_per_sec": 0, 00:23:18.560 
"rw_mbytes_per_sec": 0, 00:23:18.560 "r_mbytes_per_sec": 0, 00:23:18.560 "w_mbytes_per_sec": 0 00:23:18.560 }, 00:23:18.560 "claimed": true, 00:23:18.560 "claim_type": "exclusive_write", 00:23:18.560 "zoned": false, 00:23:18.560 "supported_io_types": { 00:23:18.560 "read": true, 00:23:18.560 "write": true, 00:23:18.560 "unmap": true, 00:23:18.560 "flush": true, 00:23:18.560 "reset": true, 00:23:18.560 "nvme_admin": false, 00:23:18.560 "nvme_io": false, 00:23:18.560 "nvme_io_md": false, 00:23:18.560 "write_zeroes": true, 00:23:18.560 "zcopy": true, 00:23:18.560 "get_zone_info": false, 00:23:18.560 "zone_management": false, 00:23:18.560 "zone_append": false, 00:23:18.560 "compare": false, 00:23:18.560 "compare_and_write": false, 00:23:18.560 "abort": true, 00:23:18.560 "seek_hole": false, 00:23:18.560 "seek_data": false, 00:23:18.560 "copy": true, 00:23:18.560 "nvme_iov_md": false 00:23:18.560 }, 00:23:18.560 "memory_domains": [ 00:23:18.560 { 00:23:18.560 "dma_device_id": "system", 00:23:18.560 "dma_device_type": 1 00:23:18.560 }, 00:23:18.560 { 00:23:18.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.560 "dma_device_type": 2 00:23:18.560 } 00:23:18.560 ], 00:23:18.560 "driver_specific": {} 00:23:18.560 } 00:23:18.560 ] 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:18.560 05:33:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:18.560 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.819 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.819 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:18.819 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.819 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.819 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:18.819 "name": "Existed_Raid", 00:23:18.819 "uuid": "89926c05-fdb0-43c5-af34-0c4d72ced0dd", 00:23:18.819 "strip_size_kb": 64, 00:23:18.819 "state": "online", 00:23:18.819 "raid_level": "raid5f", 00:23:18.819 "superblock": false, 00:23:18.819 "num_base_bdevs": 4, 00:23:18.819 "num_base_bdevs_discovered": 4, 00:23:18.819 "num_base_bdevs_operational": 4, 00:23:18.819 "base_bdevs_list": [ 00:23:18.819 { 00:23:18.819 "name": 
"BaseBdev1", 00:23:18.819 "uuid": "ac447400-c360-42ff-a7f6-277be19bdeae", 00:23:18.819 "is_configured": true, 00:23:18.819 "data_offset": 0, 00:23:18.819 "data_size": 65536 00:23:18.819 }, 00:23:18.819 { 00:23:18.819 "name": "BaseBdev2", 00:23:18.819 "uuid": "6183f6cc-da41-40b7-acd6-960cc162874a", 00:23:18.819 "is_configured": true, 00:23:18.819 "data_offset": 0, 00:23:18.819 "data_size": 65536 00:23:18.819 }, 00:23:18.819 { 00:23:18.819 "name": "BaseBdev3", 00:23:18.819 "uuid": "5787acfb-0cac-4a00-b827-cd09f612f971", 00:23:18.819 "is_configured": true, 00:23:18.819 "data_offset": 0, 00:23:18.819 "data_size": 65536 00:23:18.819 }, 00:23:18.819 { 00:23:18.819 "name": "BaseBdev4", 00:23:18.819 "uuid": "fcf9776e-18f9-481f-a568-5b423acfff21", 00:23:18.819 "is_configured": true, 00:23:18.819 "data_offset": 0, 00:23:18.819 "data_size": 65536 00:23:18.819 } 00:23:18.819 ] 00:23:18.819 }' 00:23:18.819 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:18.819 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.122 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:19.122 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.123 [2024-11-20 05:33:50.698893] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:19.123 "name": "Existed_Raid", 00:23:19.123 "aliases": [ 00:23:19.123 "89926c05-fdb0-43c5-af34-0c4d72ced0dd" 00:23:19.123 ], 00:23:19.123 "product_name": "Raid Volume", 00:23:19.123 "block_size": 512, 00:23:19.123 "num_blocks": 196608, 00:23:19.123 "uuid": "89926c05-fdb0-43c5-af34-0c4d72ced0dd", 00:23:19.123 "assigned_rate_limits": { 00:23:19.123 "rw_ios_per_sec": 0, 00:23:19.123 "rw_mbytes_per_sec": 0, 00:23:19.123 "r_mbytes_per_sec": 0, 00:23:19.123 "w_mbytes_per_sec": 0 00:23:19.123 }, 00:23:19.123 "claimed": false, 00:23:19.123 "zoned": false, 00:23:19.123 "supported_io_types": { 00:23:19.123 "read": true, 00:23:19.123 "write": true, 00:23:19.123 "unmap": false, 00:23:19.123 "flush": false, 00:23:19.123 "reset": true, 00:23:19.123 "nvme_admin": false, 00:23:19.123 "nvme_io": false, 00:23:19.123 "nvme_io_md": false, 00:23:19.123 "write_zeroes": true, 00:23:19.123 "zcopy": false, 00:23:19.123 "get_zone_info": false, 00:23:19.123 "zone_management": false, 00:23:19.123 "zone_append": false, 00:23:19.123 "compare": false, 00:23:19.123 "compare_and_write": false, 00:23:19.123 "abort": false, 00:23:19.123 "seek_hole": false, 00:23:19.123 "seek_data": false, 00:23:19.123 "copy": false, 00:23:19.123 "nvme_iov_md": false 00:23:19.123 }, 00:23:19.123 "driver_specific": { 00:23:19.123 "raid": { 00:23:19.123 "uuid": "89926c05-fdb0-43c5-af34-0c4d72ced0dd", 00:23:19.123 "strip_size_kb": 64, 
00:23:19.123 "state": "online", 00:23:19.123 "raid_level": "raid5f", 00:23:19.123 "superblock": false, 00:23:19.123 "num_base_bdevs": 4, 00:23:19.123 "num_base_bdevs_discovered": 4, 00:23:19.123 "num_base_bdevs_operational": 4, 00:23:19.123 "base_bdevs_list": [ 00:23:19.123 { 00:23:19.123 "name": "BaseBdev1", 00:23:19.123 "uuid": "ac447400-c360-42ff-a7f6-277be19bdeae", 00:23:19.123 "is_configured": true, 00:23:19.123 "data_offset": 0, 00:23:19.123 "data_size": 65536 00:23:19.123 }, 00:23:19.123 { 00:23:19.123 "name": "BaseBdev2", 00:23:19.123 "uuid": "6183f6cc-da41-40b7-acd6-960cc162874a", 00:23:19.123 "is_configured": true, 00:23:19.123 "data_offset": 0, 00:23:19.123 "data_size": 65536 00:23:19.123 }, 00:23:19.123 { 00:23:19.123 "name": "BaseBdev3", 00:23:19.123 "uuid": "5787acfb-0cac-4a00-b827-cd09f612f971", 00:23:19.123 "is_configured": true, 00:23:19.123 "data_offset": 0, 00:23:19.123 "data_size": 65536 00:23:19.123 }, 00:23:19.123 { 00:23:19.123 "name": "BaseBdev4", 00:23:19.123 "uuid": "fcf9776e-18f9-481f-a568-5b423acfff21", 00:23:19.123 "is_configured": true, 00:23:19.123 "data_offset": 0, 00:23:19.123 "data_size": 65536 00:23:19.123 } 00:23:19.123 ] 00:23:19.123 } 00:23:19.123 } 00:23:19.123 }' 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:19.123 BaseBdev2 00:23:19.123 BaseBdev3 00:23:19.123 BaseBdev4' 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:19.123 05:33:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.123 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:23:19.124 [2024-11-20 05:33:50.898791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:19.124 05:33:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.124 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.381 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.381 05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:19.381 "name": "Existed_Raid", 00:23:19.381 "uuid": "89926c05-fdb0-43c5-af34-0c4d72ced0dd", 00:23:19.381 "strip_size_kb": 64, 00:23:19.381 "state": "online", 00:23:19.381 "raid_level": "raid5f", 00:23:19.381 "superblock": false, 00:23:19.381 "num_base_bdevs": 4, 00:23:19.381 "num_base_bdevs_discovered": 3, 00:23:19.381 "num_base_bdevs_operational": 3, 00:23:19.381 "base_bdevs_list": [ 00:23:19.381 { 00:23:19.381 "name": null, 00:23:19.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.381 "is_configured": false, 00:23:19.381 "data_offset": 0, 00:23:19.381 "data_size": 65536 00:23:19.381 }, 00:23:19.381 { 00:23:19.381 "name": "BaseBdev2", 00:23:19.381 "uuid": "6183f6cc-da41-40b7-acd6-960cc162874a", 00:23:19.381 "is_configured": true, 00:23:19.381 "data_offset": 0, 00:23:19.381 "data_size": 65536 00:23:19.381 }, 00:23:19.381 { 00:23:19.381 "name": "BaseBdev3", 00:23:19.381 "uuid": "5787acfb-0cac-4a00-b827-cd09f612f971", 00:23:19.381 "is_configured": true, 00:23:19.381 "data_offset": 0, 00:23:19.381 "data_size": 65536 00:23:19.381 }, 00:23:19.381 { 00:23:19.381 "name": "BaseBdev4", 00:23:19.381 "uuid": "fcf9776e-18f9-481f-a568-5b423acfff21", 00:23:19.381 "is_configured": true, 00:23:19.381 "data_offset": 0, 00:23:19.381 "data_size": 65536 00:23:19.381 } 00:23:19.381 ] 00:23:19.381 }' 00:23:19.381 
05:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:19.381 05:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.640 [2024-11-20 05:33:51.324878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:19.640 [2024-11-20 05:33:51.324966] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:19.640 [2024-11-20 05:33:51.371667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.640 [2024-11-20 05:33:51.411700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.640 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.900 [2024-11-20 05:33:51.513898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:19.900 [2024-11-20 05:33:51.513939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.900 05:33:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.900 BaseBdev2 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.900 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.901 [ 00:23:19.901 { 00:23:19.901 "name": "BaseBdev2", 00:23:19.901 "aliases": [ 00:23:19.901 "8613b163-11de-4938-a6ae-19d7710f8a4d" 00:23:19.901 ], 00:23:19.901 "product_name": "Malloc disk", 00:23:19.901 "block_size": 512, 00:23:19.901 "num_blocks": 65536, 00:23:19.901 "uuid": "8613b163-11de-4938-a6ae-19d7710f8a4d", 00:23:19.901 "assigned_rate_limits": { 00:23:19.901 "rw_ios_per_sec": 0, 00:23:19.901 "rw_mbytes_per_sec": 0, 00:23:19.901 "r_mbytes_per_sec": 0, 00:23:19.901 "w_mbytes_per_sec": 0 00:23:19.901 }, 00:23:19.901 "claimed": false, 00:23:19.901 "zoned": false, 00:23:19.901 "supported_io_types": { 00:23:19.901 "read": true, 00:23:19.901 "write": true, 00:23:19.901 "unmap": true, 00:23:19.901 "flush": true, 00:23:19.901 "reset": true, 00:23:19.901 "nvme_admin": false, 00:23:19.901 "nvme_io": false, 00:23:19.901 "nvme_io_md": false, 00:23:19.901 "write_zeroes": true, 00:23:19.901 "zcopy": true, 00:23:19.901 "get_zone_info": false, 00:23:19.901 "zone_management": false, 00:23:19.901 "zone_append": false, 00:23:19.901 "compare": false, 00:23:19.901 "compare_and_write": false, 00:23:19.901 "abort": true, 00:23:19.901 "seek_hole": false, 00:23:19.901 "seek_data": false, 00:23:19.901 "copy": true, 00:23:19.901 "nvme_iov_md": false 00:23:19.901 }, 00:23:19.901 "memory_domains": [ 00:23:19.901 { 00:23:19.901 "dma_device_id": "system", 00:23:19.901 "dma_device_type": 1 00:23:19.901 }, 
00:23:19.901 { 00:23:19.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.901 "dma_device_type": 2 00:23:19.901 } 00:23:19.901 ], 00:23:19.901 "driver_specific": {} 00:23:19.901 } 00:23:19.901 ] 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.901 BaseBdev3 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.901 [ 00:23:19.901 { 00:23:19.901 "name": "BaseBdev3", 00:23:19.901 "aliases": [ 00:23:19.901 "9a01f7c8-a701-43b5-935e-c8b0f6953205" 00:23:19.901 ], 00:23:19.901 "product_name": "Malloc disk", 00:23:19.901 "block_size": 512, 00:23:19.901 "num_blocks": 65536, 00:23:19.901 "uuid": "9a01f7c8-a701-43b5-935e-c8b0f6953205", 00:23:19.901 "assigned_rate_limits": { 00:23:19.901 "rw_ios_per_sec": 0, 00:23:19.901 "rw_mbytes_per_sec": 0, 00:23:19.901 "r_mbytes_per_sec": 0, 00:23:19.901 "w_mbytes_per_sec": 0 00:23:19.901 }, 00:23:19.901 "claimed": false, 00:23:19.901 "zoned": false, 00:23:19.901 "supported_io_types": { 00:23:19.901 "read": true, 00:23:19.901 "write": true, 00:23:19.901 "unmap": true, 00:23:19.901 "flush": true, 00:23:19.901 "reset": true, 00:23:19.901 "nvme_admin": false, 00:23:19.901 "nvme_io": false, 00:23:19.901 "nvme_io_md": false, 00:23:19.901 "write_zeroes": true, 00:23:19.901 "zcopy": true, 00:23:19.901 "get_zone_info": false, 00:23:19.901 "zone_management": false, 00:23:19.901 "zone_append": false, 00:23:19.901 "compare": false, 00:23:19.901 "compare_and_write": false, 00:23:19.901 "abort": true, 00:23:19.901 "seek_hole": false, 00:23:19.901 "seek_data": false, 00:23:19.901 "copy": true, 00:23:19.901 "nvme_iov_md": false 00:23:19.901 }, 00:23:19.901 "memory_domains": [ 00:23:19.901 { 00:23:19.901 "dma_device_id": "system", 00:23:19.901 
"dma_device_type": 1 00:23:19.901 }, 00:23:19.901 { 00:23:19.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.901 "dma_device_type": 2 00:23:19.901 } 00:23:19.901 ], 00:23:19.901 "driver_specific": {} 00:23:19.901 } 00:23:19.901 ] 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.901 BaseBdev4 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.901 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:23:19.902 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:23:19.902 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:19.902 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:19.902 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:19.902 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:19.902 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:19.902 05:33:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.902 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.902 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.902 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:19.902 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.902 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.902 [ 00:23:19.902 { 00:23:19.902 "name": "BaseBdev4", 00:23:19.902 "aliases": [ 00:23:19.902 "2a4bf77e-699e-4eeb-a185-a3b519dc66ea" 00:23:19.902 ], 00:23:19.902 "product_name": "Malloc disk", 00:23:19.902 "block_size": 512, 00:23:19.902 "num_blocks": 65536, 00:23:19.902 "uuid": "2a4bf77e-699e-4eeb-a185-a3b519dc66ea", 00:23:19.902 "assigned_rate_limits": { 00:23:19.902 "rw_ios_per_sec": 0, 00:23:19.902 "rw_mbytes_per_sec": 0, 00:23:19.902 "r_mbytes_per_sec": 0, 00:23:19.902 "w_mbytes_per_sec": 0 00:23:19.902 }, 00:23:19.902 "claimed": false, 00:23:19.902 "zoned": false, 00:23:19.902 "supported_io_types": { 00:23:19.902 "read": true, 00:23:19.902 "write": true, 00:23:19.902 "unmap": true, 00:23:19.902 "flush": true, 00:23:19.902 "reset": true, 00:23:19.902 "nvme_admin": false, 00:23:19.902 "nvme_io": false, 00:23:19.902 "nvme_io_md": false, 00:23:19.902 "write_zeroes": true, 00:23:19.902 "zcopy": true, 00:23:19.902 "get_zone_info": false, 00:23:19.902 "zone_management": false, 00:23:19.902 "zone_append": false, 00:23:19.902 "compare": false, 00:23:19.902 "compare_and_write": false, 00:23:19.902 "abort": true, 00:23:19.902 "seek_hole": false, 00:23:19.902 "seek_data": false, 00:23:19.902 "copy": true, 00:23:19.902 "nvme_iov_md": false 00:23:19.902 }, 00:23:19.902 "memory_domains": [ 00:23:19.902 { 00:23:19.902 
"dma_device_id": "system", 00:23:19.902 "dma_device_type": 1 00:23:19.902 }, 00:23:19.902 { 00:23:19.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.902 "dma_device_type": 2 00:23:19.902 } 00:23:19.902 ], 00:23:19.902 "driver_specific": {} 00:23:19.902 } 00:23:19.902 ] 00:23:19.902 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.902 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:19.902 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:19.902 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.161 [2024-11-20 05:33:51.738640] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:20.161 [2024-11-20 05:33:51.738679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:20.161 [2024-11-20 05:33:51.738697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:20.161 [2024-11-20 05:33:51.740192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:20.161 [2024-11-20 05:33:51.740237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.161 "name": "Existed_Raid", 00:23:20.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.161 "strip_size_kb": 64, 00:23:20.161 "state": "configuring", 00:23:20.161 "raid_level": "raid5f", 00:23:20.161 "superblock": false, 00:23:20.161 
"num_base_bdevs": 4, 00:23:20.161 "num_base_bdevs_discovered": 3, 00:23:20.161 "num_base_bdevs_operational": 4, 00:23:20.161 "base_bdevs_list": [ 00:23:20.161 { 00:23:20.161 "name": "BaseBdev1", 00:23:20.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.161 "is_configured": false, 00:23:20.161 "data_offset": 0, 00:23:20.161 "data_size": 0 00:23:20.161 }, 00:23:20.161 { 00:23:20.161 "name": "BaseBdev2", 00:23:20.161 "uuid": "8613b163-11de-4938-a6ae-19d7710f8a4d", 00:23:20.161 "is_configured": true, 00:23:20.161 "data_offset": 0, 00:23:20.161 "data_size": 65536 00:23:20.161 }, 00:23:20.161 { 00:23:20.161 "name": "BaseBdev3", 00:23:20.161 "uuid": "9a01f7c8-a701-43b5-935e-c8b0f6953205", 00:23:20.161 "is_configured": true, 00:23:20.161 "data_offset": 0, 00:23:20.161 "data_size": 65536 00:23:20.161 }, 00:23:20.161 { 00:23:20.161 "name": "BaseBdev4", 00:23:20.161 "uuid": "2a4bf77e-699e-4eeb-a185-a3b519dc66ea", 00:23:20.161 "is_configured": true, 00:23:20.161 "data_offset": 0, 00:23:20.161 "data_size": 65536 00:23:20.161 } 00:23:20.161 ] 00:23:20.161 }' 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:20.161 05:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.420 [2024-11-20 05:33:52.086716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.420 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.420 "name": "Existed_Raid", 00:23:20.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.420 "strip_size_kb": 64, 00:23:20.420 "state": "configuring", 00:23:20.420 "raid_level": "raid5f", 00:23:20.420 "superblock": false, 00:23:20.420 "num_base_bdevs": 4, 
00:23:20.420 "num_base_bdevs_discovered": 2, 00:23:20.420 "num_base_bdevs_operational": 4, 00:23:20.420 "base_bdevs_list": [ 00:23:20.420 { 00:23:20.420 "name": "BaseBdev1", 00:23:20.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.420 "is_configured": false, 00:23:20.420 "data_offset": 0, 00:23:20.420 "data_size": 0 00:23:20.420 }, 00:23:20.420 { 00:23:20.420 "name": null, 00:23:20.420 "uuid": "8613b163-11de-4938-a6ae-19d7710f8a4d", 00:23:20.420 "is_configured": false, 00:23:20.420 "data_offset": 0, 00:23:20.420 "data_size": 65536 00:23:20.420 }, 00:23:20.420 { 00:23:20.420 "name": "BaseBdev3", 00:23:20.420 "uuid": "9a01f7c8-a701-43b5-935e-c8b0f6953205", 00:23:20.420 "is_configured": true, 00:23:20.420 "data_offset": 0, 00:23:20.420 "data_size": 65536 00:23:20.420 }, 00:23:20.420 { 00:23:20.420 "name": "BaseBdev4", 00:23:20.420 "uuid": "2a4bf77e-699e-4eeb-a185-a3b519dc66ea", 00:23:20.420 "is_configured": true, 00:23:20.420 "data_offset": 0, 00:23:20.420 "data_size": 65536 00:23:20.420 } 00:23:20.420 ] 00:23:20.421 }' 00:23:20.421 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:20.421 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.678 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:20.678 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.678 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.678 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.678 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.678 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:20.678 05:33:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:20.678 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.678 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.678 [2024-11-20 05:33:52.493394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:20.678 BaseBdev1 00:23:20.678 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.678 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:23:20.678 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:23:20.678 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:20.679 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:20.679 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:20.679 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:20.679 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:20.679 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.679 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.679 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.679 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:20.679 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.679 05:33:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.937 [ 00:23:20.937 { 00:23:20.937 "name": "BaseBdev1", 00:23:20.937 "aliases": [ 00:23:20.937 "bc1bac45-ef14-41e8-ae41-f380cce78900" 00:23:20.937 ], 00:23:20.937 "product_name": "Malloc disk", 00:23:20.937 "block_size": 512, 00:23:20.937 "num_blocks": 65536, 00:23:20.937 "uuid": "bc1bac45-ef14-41e8-ae41-f380cce78900", 00:23:20.937 "assigned_rate_limits": { 00:23:20.937 "rw_ios_per_sec": 0, 00:23:20.937 "rw_mbytes_per_sec": 0, 00:23:20.937 "r_mbytes_per_sec": 0, 00:23:20.937 "w_mbytes_per_sec": 0 00:23:20.937 }, 00:23:20.937 "claimed": true, 00:23:20.937 "claim_type": "exclusive_write", 00:23:20.937 "zoned": false, 00:23:20.937 "supported_io_types": { 00:23:20.937 "read": true, 00:23:20.937 "write": true, 00:23:20.937 "unmap": true, 00:23:20.937 "flush": true, 00:23:20.937 "reset": true, 00:23:20.937 "nvme_admin": false, 00:23:20.937 "nvme_io": false, 00:23:20.937 "nvme_io_md": false, 00:23:20.937 "write_zeroes": true, 00:23:20.937 "zcopy": true, 00:23:20.937 "get_zone_info": false, 00:23:20.937 "zone_management": false, 00:23:20.937 "zone_append": false, 00:23:20.937 "compare": false, 00:23:20.937 "compare_and_write": false, 00:23:20.937 "abort": true, 00:23:20.937 "seek_hole": false, 00:23:20.937 "seek_data": false, 00:23:20.937 "copy": true, 00:23:20.937 "nvme_iov_md": false 00:23:20.937 }, 00:23:20.937 "memory_domains": [ 00:23:20.937 { 00:23:20.937 "dma_device_id": "system", 00:23:20.937 "dma_device_type": 1 00:23:20.937 }, 00:23:20.937 { 00:23:20.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.937 "dma_device_type": 2 00:23:20.937 } 00:23:20.937 ], 00:23:20.937 "driver_specific": {} 00:23:20.937 } 00:23:20.937 ] 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:20.937 05:33:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.937 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.937 "name": "Existed_Raid", 00:23:20.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.937 "strip_size_kb": 64, 00:23:20.937 "state": 
"configuring", 00:23:20.937 "raid_level": "raid5f", 00:23:20.937 "superblock": false, 00:23:20.937 "num_base_bdevs": 4, 00:23:20.937 "num_base_bdevs_discovered": 3, 00:23:20.937 "num_base_bdevs_operational": 4, 00:23:20.937 "base_bdevs_list": [ 00:23:20.937 { 00:23:20.937 "name": "BaseBdev1", 00:23:20.937 "uuid": "bc1bac45-ef14-41e8-ae41-f380cce78900", 00:23:20.937 "is_configured": true, 00:23:20.938 "data_offset": 0, 00:23:20.938 "data_size": 65536 00:23:20.938 }, 00:23:20.938 { 00:23:20.938 "name": null, 00:23:20.938 "uuid": "8613b163-11de-4938-a6ae-19d7710f8a4d", 00:23:20.938 "is_configured": false, 00:23:20.938 "data_offset": 0, 00:23:20.938 "data_size": 65536 00:23:20.938 }, 00:23:20.938 { 00:23:20.938 "name": "BaseBdev3", 00:23:20.938 "uuid": "9a01f7c8-a701-43b5-935e-c8b0f6953205", 00:23:20.938 "is_configured": true, 00:23:20.938 "data_offset": 0, 00:23:20.938 "data_size": 65536 00:23:20.938 }, 00:23:20.938 { 00:23:20.938 "name": "BaseBdev4", 00:23:20.938 "uuid": "2a4bf77e-699e-4eeb-a185-a3b519dc66ea", 00:23:20.938 "is_configured": true, 00:23:20.938 "data_offset": 0, 00:23:20.938 "data_size": 65536 00:23:20.938 } 00:23:20.938 ] 00:23:20.938 }' 00:23:20.938 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:20.938 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.195 05:33:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.195 [2024-11-20 05:33:52.857531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.195 05:33:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.195 05:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.196 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.196 "name": "Existed_Raid", 00:23:21.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.196 "strip_size_kb": 64, 00:23:21.196 "state": "configuring", 00:23:21.196 "raid_level": "raid5f", 00:23:21.196 "superblock": false, 00:23:21.196 "num_base_bdevs": 4, 00:23:21.196 "num_base_bdevs_discovered": 2, 00:23:21.196 "num_base_bdevs_operational": 4, 00:23:21.196 "base_bdevs_list": [ 00:23:21.196 { 00:23:21.196 "name": "BaseBdev1", 00:23:21.196 "uuid": "bc1bac45-ef14-41e8-ae41-f380cce78900", 00:23:21.196 "is_configured": true, 00:23:21.196 "data_offset": 0, 00:23:21.196 "data_size": 65536 00:23:21.196 }, 00:23:21.196 { 00:23:21.196 "name": null, 00:23:21.196 "uuid": "8613b163-11de-4938-a6ae-19d7710f8a4d", 00:23:21.196 "is_configured": false, 00:23:21.196 "data_offset": 0, 00:23:21.196 "data_size": 65536 00:23:21.196 }, 00:23:21.196 { 00:23:21.196 "name": null, 00:23:21.196 "uuid": "9a01f7c8-a701-43b5-935e-c8b0f6953205", 00:23:21.196 "is_configured": false, 00:23:21.196 "data_offset": 0, 00:23:21.196 "data_size": 65536 00:23:21.196 }, 00:23:21.196 { 00:23:21.196 "name": "BaseBdev4", 00:23:21.196 "uuid": "2a4bf77e-699e-4eeb-a185-a3b519dc66ea", 00:23:21.196 "is_configured": true, 00:23:21.196 "data_offset": 0, 00:23:21.196 "data_size": 65536 00:23:21.196 } 00:23:21.196 ] 00:23:21.196 }' 00:23:21.196 05:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.196 05:33:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.454 [2024-11-20 05:33:53.197578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:21.454 
05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.454 "name": "Existed_Raid", 00:23:21.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.454 "strip_size_kb": 64, 00:23:21.454 "state": "configuring", 00:23:21.454 "raid_level": "raid5f", 00:23:21.454 "superblock": false, 00:23:21.454 "num_base_bdevs": 4, 00:23:21.454 "num_base_bdevs_discovered": 3, 00:23:21.454 "num_base_bdevs_operational": 4, 00:23:21.454 "base_bdevs_list": [ 00:23:21.454 { 00:23:21.454 "name": "BaseBdev1", 00:23:21.454 "uuid": "bc1bac45-ef14-41e8-ae41-f380cce78900", 00:23:21.454 "is_configured": true, 00:23:21.454 "data_offset": 0, 00:23:21.454 "data_size": 65536 00:23:21.454 }, 00:23:21.454 { 00:23:21.454 "name": null, 00:23:21.454 "uuid": "8613b163-11de-4938-a6ae-19d7710f8a4d", 00:23:21.454 "is_configured": 
false, 00:23:21.454 "data_offset": 0, 00:23:21.454 "data_size": 65536 00:23:21.454 }, 00:23:21.454 { 00:23:21.454 "name": "BaseBdev3", 00:23:21.454 "uuid": "9a01f7c8-a701-43b5-935e-c8b0f6953205", 00:23:21.454 "is_configured": true, 00:23:21.454 "data_offset": 0, 00:23:21.454 "data_size": 65536 00:23:21.454 }, 00:23:21.454 { 00:23:21.454 "name": "BaseBdev4", 00:23:21.454 "uuid": "2a4bf77e-699e-4eeb-a185-a3b519dc66ea", 00:23:21.454 "is_configured": true, 00:23:21.454 "data_offset": 0, 00:23:21.454 "data_size": 65536 00:23:21.454 } 00:23:21.454 ] 00:23:21.454 }' 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.454 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.711 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.711 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.711 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:21.711 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.711 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.711 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:21.711 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:21.711 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.711 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.711 [2024-11-20 05:33:53.537659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.969 "name": "Existed_Raid", 00:23:21.969 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:21.969 "strip_size_kb": 64, 00:23:21.969 "state": "configuring", 00:23:21.969 "raid_level": "raid5f", 00:23:21.969 "superblock": false, 00:23:21.969 "num_base_bdevs": 4, 00:23:21.969 "num_base_bdevs_discovered": 2, 00:23:21.969 "num_base_bdevs_operational": 4, 00:23:21.969 "base_bdevs_list": [ 00:23:21.969 { 00:23:21.969 "name": null, 00:23:21.969 "uuid": "bc1bac45-ef14-41e8-ae41-f380cce78900", 00:23:21.969 "is_configured": false, 00:23:21.969 "data_offset": 0, 00:23:21.969 "data_size": 65536 00:23:21.969 }, 00:23:21.969 { 00:23:21.969 "name": null, 00:23:21.969 "uuid": "8613b163-11de-4938-a6ae-19d7710f8a4d", 00:23:21.969 "is_configured": false, 00:23:21.969 "data_offset": 0, 00:23:21.969 "data_size": 65536 00:23:21.969 }, 00:23:21.969 { 00:23:21.969 "name": "BaseBdev3", 00:23:21.969 "uuid": "9a01f7c8-a701-43b5-935e-c8b0f6953205", 00:23:21.969 "is_configured": true, 00:23:21.969 "data_offset": 0, 00:23:21.969 "data_size": 65536 00:23:21.969 }, 00:23:21.969 { 00:23:21.969 "name": "BaseBdev4", 00:23:21.969 "uuid": "2a4bf77e-699e-4eeb-a185-a3b519dc66ea", 00:23:21.969 "is_configured": true, 00:23:21.969 "data_offset": 0, 00:23:21.969 "data_size": 65536 00:23:21.969 } 00:23:21.969 ] 00:23:21.969 }' 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.969 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.227 [2024-11-20 05:33:53.940187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.227 05:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.227 "name": "Existed_Raid", 00:23:22.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.228 "strip_size_kb": 64, 00:23:22.228 "state": "configuring", 00:23:22.228 "raid_level": "raid5f", 00:23:22.228 "superblock": false, 00:23:22.228 "num_base_bdevs": 4, 00:23:22.228 "num_base_bdevs_discovered": 3, 00:23:22.228 "num_base_bdevs_operational": 4, 00:23:22.228 "base_bdevs_list": [ 00:23:22.228 { 00:23:22.228 "name": null, 00:23:22.228 "uuid": "bc1bac45-ef14-41e8-ae41-f380cce78900", 00:23:22.228 "is_configured": false, 00:23:22.228 "data_offset": 0, 00:23:22.228 "data_size": 65536 00:23:22.228 }, 00:23:22.228 { 00:23:22.228 "name": "BaseBdev2", 00:23:22.228 "uuid": "8613b163-11de-4938-a6ae-19d7710f8a4d", 00:23:22.228 "is_configured": true, 00:23:22.228 "data_offset": 0, 00:23:22.228 "data_size": 65536 00:23:22.228 }, 00:23:22.228 { 00:23:22.228 "name": "BaseBdev3", 00:23:22.228 "uuid": "9a01f7c8-a701-43b5-935e-c8b0f6953205", 00:23:22.228 "is_configured": true, 00:23:22.228 "data_offset": 0, 00:23:22.228 "data_size": 65536 00:23:22.228 }, 00:23:22.228 { 00:23:22.228 "name": "BaseBdev4", 00:23:22.228 "uuid": "2a4bf77e-699e-4eeb-a185-a3b519dc66ea", 00:23:22.228 "is_configured": true, 00:23:22.228 "data_offset": 0, 00:23:22.228 "data_size": 65536 00:23:22.228 } 00:23:22.228 ] 00:23:22.228 }' 00:23:22.228 05:33:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.228 05:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bc1bac45-ef14-41e8-ae41-f380cce78900 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.530 [2024-11-20 05:33:54.298522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:22.530 [2024-11-20 
05:33:54.298577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:22.530 [2024-11-20 05:33:54.298583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:22.530 [2024-11-20 05:33:54.298779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:23:22.530 [2024-11-20 05:33:54.302555] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:22.530 [2024-11-20 05:33:54.302581] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:22.530 [2024-11-20 05:33:54.302791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.530 NewBaseBdev 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.530 [ 00:23:22.530 { 00:23:22.530 "name": "NewBaseBdev", 00:23:22.530 "aliases": [ 00:23:22.530 "bc1bac45-ef14-41e8-ae41-f380cce78900" 00:23:22.530 ], 00:23:22.530 "product_name": "Malloc disk", 00:23:22.530 "block_size": 512, 00:23:22.530 "num_blocks": 65536, 00:23:22.530 "uuid": "bc1bac45-ef14-41e8-ae41-f380cce78900", 00:23:22.530 "assigned_rate_limits": { 00:23:22.530 "rw_ios_per_sec": 0, 00:23:22.530 "rw_mbytes_per_sec": 0, 00:23:22.530 "r_mbytes_per_sec": 0, 00:23:22.530 "w_mbytes_per_sec": 0 00:23:22.530 }, 00:23:22.530 "claimed": true, 00:23:22.530 "claim_type": "exclusive_write", 00:23:22.530 "zoned": false, 00:23:22.530 "supported_io_types": { 00:23:22.530 "read": true, 00:23:22.530 "write": true, 00:23:22.530 "unmap": true, 00:23:22.530 "flush": true, 00:23:22.530 "reset": true, 00:23:22.530 "nvme_admin": false, 00:23:22.530 "nvme_io": false, 00:23:22.530 "nvme_io_md": false, 00:23:22.530 "write_zeroes": true, 00:23:22.530 "zcopy": true, 00:23:22.530 "get_zone_info": false, 00:23:22.530 "zone_management": false, 00:23:22.530 "zone_append": false, 00:23:22.530 "compare": false, 00:23:22.530 "compare_and_write": false, 00:23:22.530 "abort": true, 00:23:22.530 "seek_hole": false, 00:23:22.530 "seek_data": false, 00:23:22.530 "copy": true, 00:23:22.530 "nvme_iov_md": false 00:23:22.530 }, 00:23:22.530 "memory_domains": [ 00:23:22.530 { 00:23:22.530 "dma_device_id": "system", 00:23:22.530 "dma_device_type": 1 00:23:22.530 }, 00:23:22.530 { 00:23:22.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:22.530 "dma_device_type": 2 00:23:22.530 } 
00:23:22.530 ], 00:23:22.530 "driver_specific": {} 00:23:22.530 } 00:23:22.530 ] 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.530 05:33:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.809 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.809 "name": "Existed_Raid", 00:23:22.809 "uuid": "332c9321-78e5-4378-931a-eee9370969e8", 00:23:22.809 "strip_size_kb": 64, 00:23:22.809 "state": "online", 00:23:22.809 "raid_level": "raid5f", 00:23:22.809 "superblock": false, 00:23:22.809 "num_base_bdevs": 4, 00:23:22.809 "num_base_bdevs_discovered": 4, 00:23:22.809 "num_base_bdevs_operational": 4, 00:23:22.809 "base_bdevs_list": [ 00:23:22.809 { 00:23:22.809 "name": "NewBaseBdev", 00:23:22.809 "uuid": "bc1bac45-ef14-41e8-ae41-f380cce78900", 00:23:22.809 "is_configured": true, 00:23:22.809 "data_offset": 0, 00:23:22.809 "data_size": 65536 00:23:22.809 }, 00:23:22.809 { 00:23:22.809 "name": "BaseBdev2", 00:23:22.809 "uuid": "8613b163-11de-4938-a6ae-19d7710f8a4d", 00:23:22.809 "is_configured": true, 00:23:22.809 "data_offset": 0, 00:23:22.809 "data_size": 65536 00:23:22.809 }, 00:23:22.809 { 00:23:22.809 "name": "BaseBdev3", 00:23:22.809 "uuid": "9a01f7c8-a701-43b5-935e-c8b0f6953205", 00:23:22.809 "is_configured": true, 00:23:22.809 "data_offset": 0, 00:23:22.809 "data_size": 65536 00:23:22.809 }, 00:23:22.809 { 00:23:22.809 "name": "BaseBdev4", 00:23:22.809 "uuid": "2a4bf77e-699e-4eeb-a185-a3b519dc66ea", 00:23:22.809 "is_configured": true, 00:23:22.809 "data_offset": 0, 00:23:22.809 "data_size": 65536 00:23:22.809 } 00:23:22.809 ] 00:23:22.809 }' 00:23:22.809 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.809 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.068 [2024-11-20 05:33:54.667276] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:23.068 "name": "Existed_Raid", 00:23:23.068 "aliases": [ 00:23:23.068 "332c9321-78e5-4378-931a-eee9370969e8" 00:23:23.068 ], 00:23:23.068 "product_name": "Raid Volume", 00:23:23.068 "block_size": 512, 00:23:23.068 "num_blocks": 196608, 00:23:23.068 "uuid": "332c9321-78e5-4378-931a-eee9370969e8", 00:23:23.068 "assigned_rate_limits": { 00:23:23.068 "rw_ios_per_sec": 0, 00:23:23.068 "rw_mbytes_per_sec": 0, 00:23:23.068 "r_mbytes_per_sec": 0, 00:23:23.068 "w_mbytes_per_sec": 0 00:23:23.068 }, 00:23:23.068 "claimed": false, 00:23:23.068 "zoned": false, 00:23:23.068 "supported_io_types": { 00:23:23.068 "read": true, 00:23:23.068 "write": true, 00:23:23.068 "unmap": false, 00:23:23.068 "flush": false, 00:23:23.068 "reset": true, 00:23:23.068 "nvme_admin": false, 00:23:23.068 "nvme_io": false, 00:23:23.068 "nvme_io_md": 
false, 00:23:23.068 "write_zeroes": true, 00:23:23.068 "zcopy": false, 00:23:23.068 "get_zone_info": false, 00:23:23.068 "zone_management": false, 00:23:23.068 "zone_append": false, 00:23:23.068 "compare": false, 00:23:23.068 "compare_and_write": false, 00:23:23.068 "abort": false, 00:23:23.068 "seek_hole": false, 00:23:23.068 "seek_data": false, 00:23:23.068 "copy": false, 00:23:23.068 "nvme_iov_md": false 00:23:23.068 }, 00:23:23.068 "driver_specific": { 00:23:23.068 "raid": { 00:23:23.068 "uuid": "332c9321-78e5-4378-931a-eee9370969e8", 00:23:23.068 "strip_size_kb": 64, 00:23:23.068 "state": "online", 00:23:23.068 "raid_level": "raid5f", 00:23:23.068 "superblock": false, 00:23:23.068 "num_base_bdevs": 4, 00:23:23.068 "num_base_bdevs_discovered": 4, 00:23:23.068 "num_base_bdevs_operational": 4, 00:23:23.068 "base_bdevs_list": [ 00:23:23.068 { 00:23:23.068 "name": "NewBaseBdev", 00:23:23.068 "uuid": "bc1bac45-ef14-41e8-ae41-f380cce78900", 00:23:23.068 "is_configured": true, 00:23:23.068 "data_offset": 0, 00:23:23.068 "data_size": 65536 00:23:23.068 }, 00:23:23.068 { 00:23:23.068 "name": "BaseBdev2", 00:23:23.068 "uuid": "8613b163-11de-4938-a6ae-19d7710f8a4d", 00:23:23.068 "is_configured": true, 00:23:23.068 "data_offset": 0, 00:23:23.068 "data_size": 65536 00:23:23.068 }, 00:23:23.068 { 00:23:23.068 "name": "BaseBdev3", 00:23:23.068 "uuid": "9a01f7c8-a701-43b5-935e-c8b0f6953205", 00:23:23.068 "is_configured": true, 00:23:23.068 "data_offset": 0, 00:23:23.068 "data_size": 65536 00:23:23.068 }, 00:23:23.068 { 00:23:23.068 "name": "BaseBdev4", 00:23:23.068 "uuid": "2a4bf77e-699e-4eeb-a185-a3b519dc66ea", 00:23:23.068 "is_configured": true, 00:23:23.068 "data_offset": 0, 00:23:23.068 "data_size": 65536 00:23:23.068 } 00:23:23.068 ] 00:23:23.068 } 00:23:23.068 } 00:23:23.068 }' 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:23.068 05:33:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:23.068 BaseBdev2 00:23:23.068 BaseBdev3 00:23:23.068 BaseBdev4' 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.068 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.068 [2024-11-20 05:33:54.883101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:23.068 [2024-11-20 05:33:54.883127] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:23.069 [2024-11-20 05:33:54.883179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:23.069 [2024-11-20 05:33:54.883424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:23.069 [2024-11-20 05:33:54.883438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:23.069 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.069 05:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80499 00:23:23.069 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 80499 ']' 00:23:23.069 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 80499 00:23:23.069 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:23:23.069 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:23.069 05:33:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80499 00:23:23.327 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:23.327 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:23.327 killing process with pid 80499 00:23:23.327 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80499' 00:23:23.327 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 80499 00:23:23.327 [2024-11-20 05:33:54.910890] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:23.327 05:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 80499 00:23:23.327 [2024-11-20 05:33:55.105422] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:23:23.894 00:23:23.894 real 0m8.142s 00:23:23.894 user 0m13.212s 00:23:23.894 sys 0m1.390s 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.894 ************************************ 00:23:23.894 END TEST raid5f_state_function_test 00:23:23.894 ************************************ 00:23:23.894 05:33:55 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:23:23.894 05:33:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:23:23.894 05:33:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:23.894 05:33:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:23.894 ************************************ 00:23:23.894 START TEST 
raid5f_state_function_test_sb 00:23:23.894 ************************************ 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:23:23.894 
05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:23.894 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81138 00:23:24.153 Process raid pid: 81138 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81138' 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81138 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # 
'[' -z 81138 ']' 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:24.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.153 05:33:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:24.153 [2024-11-20 05:33:55.790173] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:23:24.153 [2024-11-20 05:33:55.790296] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.153 [2024-11-20 05:33:55.950159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.413 [2024-11-20 05:33:56.050768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.413 [2024-11-20 05:33:56.189621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:24.413 [2024-11-20 05:33:56.189661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.979 [2024-11-20 05:33:56.599999] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:24.979 [2024-11-20 05:33:56.600053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:24.979 [2024-11-20 05:33:56.600063] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:24.979 [2024-11-20 05:33:56.600073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:24.979 [2024-11-20 05:33:56.600080] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:23:24.979 [2024-11-20 05:33:56.600089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:24.979 [2024-11-20 05:33:56.600095] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:24.979 [2024-11-20 05:33:56.600103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.979 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:24.979 "name": "Existed_Raid", 00:23:24.979 "uuid": "11030eb7-3493-4104-a7a1-b79139651ba4", 00:23:24.979 "strip_size_kb": 64, 00:23:24.980 "state": "configuring", 00:23:24.980 "raid_level": "raid5f", 00:23:24.980 "superblock": true, 00:23:24.980 "num_base_bdevs": 4, 00:23:24.980 "num_base_bdevs_discovered": 0, 00:23:24.980 "num_base_bdevs_operational": 4, 00:23:24.980 "base_bdevs_list": [ 00:23:24.980 { 00:23:24.980 "name": "BaseBdev1", 00:23:24.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.980 "is_configured": false, 00:23:24.980 "data_offset": 0, 00:23:24.980 "data_size": 0 00:23:24.980 }, 00:23:24.980 { 00:23:24.980 "name": "BaseBdev2", 00:23:24.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.980 "is_configured": false, 00:23:24.980 "data_offset": 0, 00:23:24.980 "data_size": 0 00:23:24.980 }, 00:23:24.980 { 00:23:24.980 "name": "BaseBdev3", 00:23:24.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.980 "is_configured": false, 00:23:24.980 "data_offset": 0, 00:23:24.980 "data_size": 0 00:23:24.980 }, 00:23:24.980 { 00:23:24.980 "name": "BaseBdev4", 00:23:24.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.980 "is_configured": false, 00:23:24.980 "data_offset": 0, 00:23:24.980 "data_size": 0 00:23:24.980 } 00:23:24.980 ] 00:23:24.980 }' 00:23:24.980 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:24.980 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.239 [2024-11-20 05:33:56.912012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:25.239 [2024-11-20 05:33:56.912055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.239 [2024-11-20 05:33:56.920017] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:25.239 [2024-11-20 05:33:56.920055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:25.239 [2024-11-20 05:33:56.920063] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:25.239 [2024-11-20 05:33:56.920073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:25.239 [2024-11-20 05:33:56.920079] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:25.239 [2024-11-20 05:33:56.920088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:25.239 [2024-11-20 05:33:56.920094] 
bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:25.239 [2024-11-20 05:33:56.920102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.239 [2024-11-20 05:33:56.952650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:25.239 BaseBdev1 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.239 [ 00:23:25.239 { 00:23:25.239 "name": "BaseBdev1", 00:23:25.239 "aliases": [ 00:23:25.239 "83a608d9-10c9-4e5c-9e5d-8718dceebb64" 00:23:25.239 ], 00:23:25.239 "product_name": "Malloc disk", 00:23:25.239 "block_size": 512, 00:23:25.239 "num_blocks": 65536, 00:23:25.239 "uuid": "83a608d9-10c9-4e5c-9e5d-8718dceebb64", 00:23:25.239 "assigned_rate_limits": { 00:23:25.239 "rw_ios_per_sec": 0, 00:23:25.239 "rw_mbytes_per_sec": 0, 00:23:25.239 "r_mbytes_per_sec": 0, 00:23:25.239 "w_mbytes_per_sec": 0 00:23:25.239 }, 00:23:25.239 "claimed": true, 00:23:25.239 "claim_type": "exclusive_write", 00:23:25.239 "zoned": false, 00:23:25.239 "supported_io_types": { 00:23:25.239 "read": true, 00:23:25.239 "write": true, 00:23:25.239 "unmap": true, 00:23:25.239 "flush": true, 00:23:25.239 "reset": true, 00:23:25.239 "nvme_admin": false, 00:23:25.239 "nvme_io": false, 00:23:25.239 "nvme_io_md": false, 00:23:25.239 "write_zeroes": true, 00:23:25.239 "zcopy": true, 00:23:25.239 "get_zone_info": false, 00:23:25.239 "zone_management": false, 00:23:25.239 "zone_append": false, 00:23:25.239 "compare": false, 00:23:25.239 "compare_and_write": false, 00:23:25.239 "abort": true, 00:23:25.239 "seek_hole": false, 00:23:25.239 "seek_data": false, 00:23:25.239 "copy": true, 00:23:25.239 "nvme_iov_md": false 00:23:25.239 }, 00:23:25.239 "memory_domains": [ 00:23:25.239 { 00:23:25.239 "dma_device_id": "system", 00:23:25.239 "dma_device_type": 1 00:23:25.239 }, 00:23:25.239 { 00:23:25.239 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:25.239 "dma_device_type": 2 00:23:25.239 } 00:23:25.239 ], 00:23:25.239 "driver_specific": {} 00:23:25.239 } 00:23:25.239 ] 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:25.239 05:33:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.239 05:33:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.239 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:25.239 "name": "Existed_Raid", 00:23:25.239 "uuid": "5bfdc212-7064-43e8-ba9c-54e824f1fd81", 00:23:25.239 "strip_size_kb": 64, 00:23:25.239 "state": "configuring", 00:23:25.239 "raid_level": "raid5f", 00:23:25.239 "superblock": true, 00:23:25.239 "num_base_bdevs": 4, 00:23:25.239 "num_base_bdevs_discovered": 1, 00:23:25.239 "num_base_bdevs_operational": 4, 00:23:25.239 "base_bdevs_list": [ 00:23:25.239 { 00:23:25.239 "name": "BaseBdev1", 00:23:25.239 "uuid": "83a608d9-10c9-4e5c-9e5d-8718dceebb64", 00:23:25.239 "is_configured": true, 00:23:25.239 "data_offset": 2048, 00:23:25.239 "data_size": 63488 00:23:25.239 }, 00:23:25.239 { 00:23:25.239 "name": "BaseBdev2", 00:23:25.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.239 "is_configured": false, 00:23:25.239 "data_offset": 0, 00:23:25.239 "data_size": 0 00:23:25.239 }, 00:23:25.239 { 00:23:25.239 "name": "BaseBdev3", 00:23:25.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.239 "is_configured": false, 00:23:25.239 "data_offset": 0, 00:23:25.239 "data_size": 0 00:23:25.239 }, 00:23:25.239 { 00:23:25.239 "name": "BaseBdev4", 00:23:25.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.239 "is_configured": false, 00:23:25.239 "data_offset": 0, 00:23:25.239 "data_size": 0 00:23:25.239 } 00:23:25.239 ] 00:23:25.239 }' 00:23:25.239 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:25.239 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.497 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:25.497 05:33:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.497 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.497 [2024-11-20 05:33:57.300780] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:25.497 [2024-11-20 05:33:57.300833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:25.497 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.497 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:25.497 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.497 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.497 [2024-11-20 05:33:57.308833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:25.497 [2024-11-20 05:33:57.310658] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:25.497 [2024-11-20 05:33:57.310700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:25.497 [2024-11-20 05:33:57.310708] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:25.497 [2024-11-20 05:33:57.310719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:25.497 [2024-11-20 05:33:57.310726] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:25.497 [2024-11-20 05:33:57.310734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:25.497 05:33:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.498 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:25.498 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:25.498 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:25.498 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:25.498 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:25.498 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:25.498 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:25.498 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:25.498 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:25.498 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:25.498 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:25.498 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:25.498 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.498 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.498 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:25.498 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.498 05:33:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.755 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:25.755 "name": "Existed_Raid", 00:23:25.755 "uuid": "20207aba-b2fd-4599-a242-30833990f511", 00:23:25.755 "strip_size_kb": 64, 00:23:25.755 "state": "configuring", 00:23:25.755 "raid_level": "raid5f", 00:23:25.755 "superblock": true, 00:23:25.755 "num_base_bdevs": 4, 00:23:25.755 "num_base_bdevs_discovered": 1, 00:23:25.755 "num_base_bdevs_operational": 4, 00:23:25.755 "base_bdevs_list": [ 00:23:25.755 { 00:23:25.755 "name": "BaseBdev1", 00:23:25.755 "uuid": "83a608d9-10c9-4e5c-9e5d-8718dceebb64", 00:23:25.755 "is_configured": true, 00:23:25.755 "data_offset": 2048, 00:23:25.755 "data_size": 63488 00:23:25.755 }, 00:23:25.755 { 00:23:25.755 "name": "BaseBdev2", 00:23:25.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.755 "is_configured": false, 00:23:25.755 "data_offset": 0, 00:23:25.755 "data_size": 0 00:23:25.755 }, 00:23:25.755 { 00:23:25.755 "name": "BaseBdev3", 00:23:25.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.755 "is_configured": false, 00:23:25.755 "data_offset": 0, 00:23:25.755 "data_size": 0 00:23:25.755 }, 00:23:25.755 { 00:23:25.755 "name": "BaseBdev4", 00:23:25.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.755 "is_configured": false, 00:23:25.755 "data_offset": 0, 00:23:25.755 "data_size": 0 00:23:25.755 } 00:23:25.755 ] 00:23:25.755 }' 00:23:25.755 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:25.755 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.015 [2024-11-20 05:33:57.651532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:26.015 BaseBdev2 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.015 [ 00:23:26.015 { 00:23:26.015 "name": "BaseBdev2", 00:23:26.015 "aliases": [ 00:23:26.015 
"0d102908-af34-44b2-81cf-35357d96a304" 00:23:26.015 ], 00:23:26.015 "product_name": "Malloc disk", 00:23:26.015 "block_size": 512, 00:23:26.015 "num_blocks": 65536, 00:23:26.015 "uuid": "0d102908-af34-44b2-81cf-35357d96a304", 00:23:26.015 "assigned_rate_limits": { 00:23:26.015 "rw_ios_per_sec": 0, 00:23:26.015 "rw_mbytes_per_sec": 0, 00:23:26.015 "r_mbytes_per_sec": 0, 00:23:26.015 "w_mbytes_per_sec": 0 00:23:26.015 }, 00:23:26.015 "claimed": true, 00:23:26.015 "claim_type": "exclusive_write", 00:23:26.015 "zoned": false, 00:23:26.015 "supported_io_types": { 00:23:26.015 "read": true, 00:23:26.015 "write": true, 00:23:26.015 "unmap": true, 00:23:26.015 "flush": true, 00:23:26.015 "reset": true, 00:23:26.015 "nvme_admin": false, 00:23:26.015 "nvme_io": false, 00:23:26.015 "nvme_io_md": false, 00:23:26.015 "write_zeroes": true, 00:23:26.015 "zcopy": true, 00:23:26.015 "get_zone_info": false, 00:23:26.015 "zone_management": false, 00:23:26.015 "zone_append": false, 00:23:26.015 "compare": false, 00:23:26.015 "compare_and_write": false, 00:23:26.015 "abort": true, 00:23:26.015 "seek_hole": false, 00:23:26.015 "seek_data": false, 00:23:26.015 "copy": true, 00:23:26.015 "nvme_iov_md": false 00:23:26.015 }, 00:23:26.015 "memory_domains": [ 00:23:26.015 { 00:23:26.015 "dma_device_id": "system", 00:23:26.015 "dma_device_type": 1 00:23:26.015 }, 00:23:26.015 { 00:23:26.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.015 "dma_device_type": 2 00:23:26.015 } 00:23:26.015 ], 00:23:26.015 "driver_specific": {} 00:23:26.015 } 00:23:26.015 ] 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:26.015 "name": "Existed_Raid", 00:23:26.015 "uuid": 
"20207aba-b2fd-4599-a242-30833990f511", 00:23:26.015 "strip_size_kb": 64, 00:23:26.015 "state": "configuring", 00:23:26.015 "raid_level": "raid5f", 00:23:26.015 "superblock": true, 00:23:26.015 "num_base_bdevs": 4, 00:23:26.015 "num_base_bdevs_discovered": 2, 00:23:26.015 "num_base_bdevs_operational": 4, 00:23:26.015 "base_bdevs_list": [ 00:23:26.015 { 00:23:26.015 "name": "BaseBdev1", 00:23:26.015 "uuid": "83a608d9-10c9-4e5c-9e5d-8718dceebb64", 00:23:26.015 "is_configured": true, 00:23:26.015 "data_offset": 2048, 00:23:26.015 "data_size": 63488 00:23:26.015 }, 00:23:26.015 { 00:23:26.015 "name": "BaseBdev2", 00:23:26.015 "uuid": "0d102908-af34-44b2-81cf-35357d96a304", 00:23:26.015 "is_configured": true, 00:23:26.015 "data_offset": 2048, 00:23:26.015 "data_size": 63488 00:23:26.015 }, 00:23:26.015 { 00:23:26.015 "name": "BaseBdev3", 00:23:26.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.015 "is_configured": false, 00:23:26.015 "data_offset": 0, 00:23:26.015 "data_size": 0 00:23:26.015 }, 00:23:26.015 { 00:23:26.015 "name": "BaseBdev4", 00:23:26.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.015 "is_configured": false, 00:23:26.015 "data_offset": 0, 00:23:26.015 "data_size": 0 00:23:26.015 } 00:23:26.015 ] 00:23:26.015 }' 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.015 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.297 05:33:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:26.297 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.297 05:33:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.297 [2024-11-20 05:33:58.048140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:26.297 BaseBdev3 
00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.297 [ 00:23:26.297 { 00:23:26.297 "name": "BaseBdev3", 00:23:26.297 "aliases": [ 00:23:26.297 "2a30abb1-f9c8-48c7-baef-1709a916d74b" 00:23:26.297 ], 00:23:26.297 "product_name": "Malloc disk", 00:23:26.297 "block_size": 512, 00:23:26.297 "num_blocks": 65536, 00:23:26.297 "uuid": "2a30abb1-f9c8-48c7-baef-1709a916d74b", 00:23:26.297 
"assigned_rate_limits": { 00:23:26.297 "rw_ios_per_sec": 0, 00:23:26.297 "rw_mbytes_per_sec": 0, 00:23:26.297 "r_mbytes_per_sec": 0, 00:23:26.297 "w_mbytes_per_sec": 0 00:23:26.297 }, 00:23:26.297 "claimed": true, 00:23:26.297 "claim_type": "exclusive_write", 00:23:26.297 "zoned": false, 00:23:26.297 "supported_io_types": { 00:23:26.297 "read": true, 00:23:26.297 "write": true, 00:23:26.297 "unmap": true, 00:23:26.297 "flush": true, 00:23:26.297 "reset": true, 00:23:26.297 "nvme_admin": false, 00:23:26.297 "nvme_io": false, 00:23:26.297 "nvme_io_md": false, 00:23:26.297 "write_zeroes": true, 00:23:26.297 "zcopy": true, 00:23:26.297 "get_zone_info": false, 00:23:26.297 "zone_management": false, 00:23:26.297 "zone_append": false, 00:23:26.297 "compare": false, 00:23:26.297 "compare_and_write": false, 00:23:26.297 "abort": true, 00:23:26.297 "seek_hole": false, 00:23:26.297 "seek_data": false, 00:23:26.297 "copy": true, 00:23:26.297 "nvme_iov_md": false 00:23:26.297 }, 00:23:26.297 "memory_domains": [ 00:23:26.297 { 00:23:26.297 "dma_device_id": "system", 00:23:26.297 "dma_device_type": 1 00:23:26.297 }, 00:23:26.297 { 00:23:26.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.297 "dma_device_type": 2 00:23:26.297 } 00:23:26.297 ], 00:23:26.297 "driver_specific": {} 00:23:26.297 } 00:23:26.297 ] 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.297 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:26.297 "name": "Existed_Raid", 00:23:26.297 "uuid": "20207aba-b2fd-4599-a242-30833990f511", 00:23:26.297 "strip_size_kb": 64, 00:23:26.297 "state": "configuring", 00:23:26.297 "raid_level": "raid5f", 00:23:26.297 "superblock": true, 00:23:26.297 "num_base_bdevs": 4, 00:23:26.297 "num_base_bdevs_discovered": 3, 
00:23:26.297 "num_base_bdevs_operational": 4, 00:23:26.297 "base_bdevs_list": [ 00:23:26.297 { 00:23:26.297 "name": "BaseBdev1", 00:23:26.297 "uuid": "83a608d9-10c9-4e5c-9e5d-8718dceebb64", 00:23:26.297 "is_configured": true, 00:23:26.297 "data_offset": 2048, 00:23:26.297 "data_size": 63488 00:23:26.297 }, 00:23:26.297 { 00:23:26.297 "name": "BaseBdev2", 00:23:26.297 "uuid": "0d102908-af34-44b2-81cf-35357d96a304", 00:23:26.298 "is_configured": true, 00:23:26.298 "data_offset": 2048, 00:23:26.298 "data_size": 63488 00:23:26.298 }, 00:23:26.298 { 00:23:26.298 "name": "BaseBdev3", 00:23:26.298 "uuid": "2a30abb1-f9c8-48c7-baef-1709a916d74b", 00:23:26.298 "is_configured": true, 00:23:26.298 "data_offset": 2048, 00:23:26.298 "data_size": 63488 00:23:26.298 }, 00:23:26.298 { 00:23:26.298 "name": "BaseBdev4", 00:23:26.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.298 "is_configured": false, 00:23:26.298 "data_offset": 0, 00:23:26.298 "data_size": 0 00:23:26.298 } 00:23:26.298 ] 00:23:26.298 }' 00:23:26.298 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.298 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.560 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:26.560 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.560 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.818 [2024-11-20 05:33:58.410854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:26.818 [2024-11-20 05:33:58.411099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:26.818 [2024-11-20 05:33:58.411112] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:26.818 [2024-11-20 
05:33:58.411392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:26.818 BaseBdev4 00:23:26.818 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.818 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:23:26.818 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:23:26.818 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:26.818 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:26.818 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:26.818 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:26.818 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:26.818 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.819 [2024-11-20 05:33:58.416407] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:26.819 [2024-11-20 05:33:58.416431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:26.819 [2024-11-20 05:33:58.416660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:26.819 05:33:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.819 [ 00:23:26.819 { 00:23:26.819 "name": "BaseBdev4", 00:23:26.819 "aliases": [ 00:23:26.819 "38959552-9da2-498d-8f00-1587646fde94" 00:23:26.819 ], 00:23:26.819 "product_name": "Malloc disk", 00:23:26.819 "block_size": 512, 00:23:26.819 "num_blocks": 65536, 00:23:26.819 "uuid": "38959552-9da2-498d-8f00-1587646fde94", 00:23:26.819 "assigned_rate_limits": { 00:23:26.819 "rw_ios_per_sec": 0, 00:23:26.819 "rw_mbytes_per_sec": 0, 00:23:26.819 "r_mbytes_per_sec": 0, 00:23:26.819 "w_mbytes_per_sec": 0 00:23:26.819 }, 00:23:26.819 "claimed": true, 00:23:26.819 "claim_type": "exclusive_write", 00:23:26.819 "zoned": false, 00:23:26.819 "supported_io_types": { 00:23:26.819 "read": true, 00:23:26.819 "write": true, 00:23:26.819 "unmap": true, 00:23:26.819 "flush": true, 00:23:26.819 "reset": true, 00:23:26.819 "nvme_admin": false, 00:23:26.819 "nvme_io": false, 00:23:26.819 "nvme_io_md": false, 00:23:26.819 "write_zeroes": true, 00:23:26.819 "zcopy": true, 00:23:26.819 "get_zone_info": false, 00:23:26.819 "zone_management": false, 00:23:26.819 "zone_append": false, 00:23:26.819 "compare": false, 00:23:26.819 "compare_and_write": false, 00:23:26.819 "abort": true, 00:23:26.819 "seek_hole": false, 00:23:26.819 "seek_data": false, 00:23:26.819 "copy": true, 00:23:26.819 "nvme_iov_md": false 00:23:26.819 }, 00:23:26.819 "memory_domains": [ 00:23:26.819 { 00:23:26.819 "dma_device_id": "system", 00:23:26.819 "dma_device_type": 1 00:23:26.819 }, 00:23:26.819 { 00:23:26.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.819 "dma_device_type": 2 00:23:26.819 } 00:23:26.819 ], 00:23:26.819 "driver_specific": {} 00:23:26.819 } 00:23:26.819 ] 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.819 05:33:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:26.819 "name": "Existed_Raid", 00:23:26.819 "uuid": "20207aba-b2fd-4599-a242-30833990f511", 00:23:26.819 "strip_size_kb": 64, 00:23:26.819 "state": "online", 00:23:26.819 "raid_level": "raid5f", 00:23:26.819 "superblock": true, 00:23:26.819 "num_base_bdevs": 4, 00:23:26.819 "num_base_bdevs_discovered": 4, 00:23:26.819 "num_base_bdevs_operational": 4, 00:23:26.819 "base_bdevs_list": [ 00:23:26.819 { 00:23:26.819 "name": "BaseBdev1", 00:23:26.819 "uuid": "83a608d9-10c9-4e5c-9e5d-8718dceebb64", 00:23:26.819 "is_configured": true, 00:23:26.819 "data_offset": 2048, 00:23:26.819 "data_size": 63488 00:23:26.819 }, 00:23:26.819 { 00:23:26.819 "name": "BaseBdev2", 00:23:26.819 "uuid": "0d102908-af34-44b2-81cf-35357d96a304", 00:23:26.819 "is_configured": true, 00:23:26.819 "data_offset": 2048, 00:23:26.819 "data_size": 63488 00:23:26.819 }, 00:23:26.819 { 00:23:26.819 "name": "BaseBdev3", 00:23:26.819 "uuid": "2a30abb1-f9c8-48c7-baef-1709a916d74b", 00:23:26.819 "is_configured": true, 00:23:26.819 "data_offset": 2048, 00:23:26.819 "data_size": 63488 00:23:26.819 }, 00:23:26.819 { 00:23:26.819 "name": "BaseBdev4", 00:23:26.819 "uuid": "38959552-9da2-498d-8f00-1587646fde94", 00:23:26.819 "is_configured": true, 00:23:26.819 "data_offset": 2048, 00:23:26.819 "data_size": 63488 00:23:26.819 } 00:23:26.819 ] 00:23:26.819 }' 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.819 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.077 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:27.077 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:23:27.077 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:27.077 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:27.077 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:27.077 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:27.077 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:27.077 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:27.077 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.077 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.077 [2024-11-20 05:33:58.750159] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:27.077 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.077 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:27.077 "name": "Existed_Raid", 00:23:27.077 "aliases": [ 00:23:27.077 "20207aba-b2fd-4599-a242-30833990f511" 00:23:27.077 ], 00:23:27.077 "product_name": "Raid Volume", 00:23:27.077 "block_size": 512, 00:23:27.077 "num_blocks": 190464, 00:23:27.077 "uuid": "20207aba-b2fd-4599-a242-30833990f511", 00:23:27.077 "assigned_rate_limits": { 00:23:27.077 "rw_ios_per_sec": 0, 00:23:27.077 "rw_mbytes_per_sec": 0, 00:23:27.077 "r_mbytes_per_sec": 0, 00:23:27.077 "w_mbytes_per_sec": 0 00:23:27.077 }, 00:23:27.077 "claimed": false, 00:23:27.077 "zoned": false, 00:23:27.077 "supported_io_types": { 00:23:27.077 "read": true, 00:23:27.077 "write": true, 00:23:27.077 "unmap": false, 00:23:27.077 "flush": false, 
00:23:27.077 "reset": true, 00:23:27.077 "nvme_admin": false, 00:23:27.077 "nvme_io": false, 00:23:27.077 "nvme_io_md": false, 00:23:27.077 "write_zeroes": true, 00:23:27.077 "zcopy": false, 00:23:27.077 "get_zone_info": false, 00:23:27.077 "zone_management": false, 00:23:27.077 "zone_append": false, 00:23:27.077 "compare": false, 00:23:27.077 "compare_and_write": false, 00:23:27.077 "abort": false, 00:23:27.077 "seek_hole": false, 00:23:27.077 "seek_data": false, 00:23:27.077 "copy": false, 00:23:27.077 "nvme_iov_md": false 00:23:27.077 }, 00:23:27.077 "driver_specific": { 00:23:27.077 "raid": { 00:23:27.077 "uuid": "20207aba-b2fd-4599-a242-30833990f511", 00:23:27.077 "strip_size_kb": 64, 00:23:27.077 "state": "online", 00:23:27.077 "raid_level": "raid5f", 00:23:27.077 "superblock": true, 00:23:27.077 "num_base_bdevs": 4, 00:23:27.077 "num_base_bdevs_discovered": 4, 00:23:27.077 "num_base_bdevs_operational": 4, 00:23:27.077 "base_bdevs_list": [ 00:23:27.077 { 00:23:27.077 "name": "BaseBdev1", 00:23:27.077 "uuid": "83a608d9-10c9-4e5c-9e5d-8718dceebb64", 00:23:27.077 "is_configured": true, 00:23:27.077 "data_offset": 2048, 00:23:27.077 "data_size": 63488 00:23:27.077 }, 00:23:27.077 { 00:23:27.077 "name": "BaseBdev2", 00:23:27.078 "uuid": "0d102908-af34-44b2-81cf-35357d96a304", 00:23:27.078 "is_configured": true, 00:23:27.078 "data_offset": 2048, 00:23:27.078 "data_size": 63488 00:23:27.078 }, 00:23:27.078 { 00:23:27.078 "name": "BaseBdev3", 00:23:27.078 "uuid": "2a30abb1-f9c8-48c7-baef-1709a916d74b", 00:23:27.078 "is_configured": true, 00:23:27.078 "data_offset": 2048, 00:23:27.078 "data_size": 63488 00:23:27.078 }, 00:23:27.078 { 00:23:27.078 "name": "BaseBdev4", 00:23:27.078 "uuid": "38959552-9da2-498d-8f00-1587646fde94", 00:23:27.078 "is_configured": true, 00:23:27.078 "data_offset": 2048, 00:23:27.078 "data_size": 63488 00:23:27.078 } 00:23:27.078 ] 00:23:27.078 } 00:23:27.078 } 00:23:27.078 }' 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:27.078 BaseBdev2 00:23:27.078 BaseBdev3 00:23:27.078 BaseBdev4' 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:27.078 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.338 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.338 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:27.338 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:27.338 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:27.338 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:27.338 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.338 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:23:27.338 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:27.338 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.338 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:27.338 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:27.338 05:33:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:27.338 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.338 05:33:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.338 [2024-11-20 05:33:58.978038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.338 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:27.338 "name": "Existed_Raid", 00:23:27.338 "uuid": "20207aba-b2fd-4599-a242-30833990f511", 00:23:27.338 "strip_size_kb": 64, 00:23:27.338 "state": "online", 00:23:27.338 "raid_level": "raid5f", 00:23:27.339 "superblock": true, 00:23:27.339 "num_base_bdevs": 4, 00:23:27.339 "num_base_bdevs_discovered": 3, 00:23:27.339 "num_base_bdevs_operational": 3, 00:23:27.339 "base_bdevs_list": [ 00:23:27.339 { 00:23:27.339 "name": null, 00:23:27.339 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:27.339 "is_configured": false, 00:23:27.339 "data_offset": 0, 00:23:27.339 "data_size": 63488 00:23:27.339 }, 00:23:27.339 { 00:23:27.339 "name": "BaseBdev2", 00:23:27.339 "uuid": "0d102908-af34-44b2-81cf-35357d96a304", 00:23:27.339 "is_configured": true, 00:23:27.339 "data_offset": 2048, 00:23:27.339 "data_size": 63488 00:23:27.339 }, 00:23:27.339 { 00:23:27.339 "name": "BaseBdev3", 00:23:27.339 "uuid": "2a30abb1-f9c8-48c7-baef-1709a916d74b", 00:23:27.339 "is_configured": true, 00:23:27.339 "data_offset": 2048, 00:23:27.339 "data_size": 63488 00:23:27.339 }, 00:23:27.339 { 00:23:27.339 "name": "BaseBdev4", 00:23:27.339 "uuid": "38959552-9da2-498d-8f00-1587646fde94", 00:23:27.339 "is_configured": true, 00:23:27.339 "data_offset": 2048, 00:23:27.339 "data_size": 63488 00:23:27.339 } 00:23:27.339 ] 00:23:27.339 }' 00:23:27.339 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:27.339 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.596 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:27.596 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:27.596 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.596 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.596 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.596 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:27.596 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.596 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:23:27.596 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:27.596 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:27.596 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.596 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.596 [2024-11-20 05:33:59.400934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:27.596 [2024-11-20 05:33:59.401077] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:27.855 [2024-11-20 05:33:59.459279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:27.855 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.855 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:27.855 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:27.855 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.855 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.855 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:27.855 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.855 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.855 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:27.855 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:27.855 
05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:27.855 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.855 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.855 [2024-11-20 05:33:59.499310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:27.855 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.855 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.856 [2024-11-20 05:33:59.598085] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:27.856 [2024-11-20 05:33:59.598130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.856 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:28.116 BaseBdev2 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.116 [ 00:23:28.116 { 00:23:28.116 "name": "BaseBdev2", 00:23:28.116 "aliases": [ 00:23:28.116 "16ca9a57-a6c1-48a8-9498-b3e2124e201c" 00:23:28.116 ], 00:23:28.116 "product_name": "Malloc disk", 00:23:28.116 "block_size": 512, 00:23:28.116 "num_blocks": 65536, 00:23:28.116 "uuid": 
"16ca9a57-a6c1-48a8-9498-b3e2124e201c", 00:23:28.116 "assigned_rate_limits": { 00:23:28.116 "rw_ios_per_sec": 0, 00:23:28.116 "rw_mbytes_per_sec": 0, 00:23:28.116 "r_mbytes_per_sec": 0, 00:23:28.116 "w_mbytes_per_sec": 0 00:23:28.116 }, 00:23:28.116 "claimed": false, 00:23:28.116 "zoned": false, 00:23:28.116 "supported_io_types": { 00:23:28.116 "read": true, 00:23:28.116 "write": true, 00:23:28.116 "unmap": true, 00:23:28.116 "flush": true, 00:23:28.116 "reset": true, 00:23:28.116 "nvme_admin": false, 00:23:28.116 "nvme_io": false, 00:23:28.116 "nvme_io_md": false, 00:23:28.116 "write_zeroes": true, 00:23:28.116 "zcopy": true, 00:23:28.116 "get_zone_info": false, 00:23:28.116 "zone_management": false, 00:23:28.116 "zone_append": false, 00:23:28.116 "compare": false, 00:23:28.116 "compare_and_write": false, 00:23:28.116 "abort": true, 00:23:28.116 "seek_hole": false, 00:23:28.116 "seek_data": false, 00:23:28.116 "copy": true, 00:23:28.116 "nvme_iov_md": false 00:23:28.116 }, 00:23:28.116 "memory_domains": [ 00:23:28.116 { 00:23:28.116 "dma_device_id": "system", 00:23:28.116 "dma_device_type": 1 00:23:28.116 }, 00:23:28.116 { 00:23:28.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.116 "dma_device_type": 2 00:23:28.116 } 00:23:28.116 ], 00:23:28.116 "driver_specific": {} 00:23:28.116 } 00:23:28.116 ] 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:28.116 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.117 BaseBdev3 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.117 [ 00:23:28.117 { 00:23:28.117 "name": "BaseBdev3", 00:23:28.117 "aliases": [ 00:23:28.117 "2167d2ad-975e-4c8f-b8a8-b9fd2086d76b" 00:23:28.117 ], 00:23:28.117 
"product_name": "Malloc disk", 00:23:28.117 "block_size": 512, 00:23:28.117 "num_blocks": 65536, 00:23:28.117 "uuid": "2167d2ad-975e-4c8f-b8a8-b9fd2086d76b", 00:23:28.117 "assigned_rate_limits": { 00:23:28.117 "rw_ios_per_sec": 0, 00:23:28.117 "rw_mbytes_per_sec": 0, 00:23:28.117 "r_mbytes_per_sec": 0, 00:23:28.117 "w_mbytes_per_sec": 0 00:23:28.117 }, 00:23:28.117 "claimed": false, 00:23:28.117 "zoned": false, 00:23:28.117 "supported_io_types": { 00:23:28.117 "read": true, 00:23:28.117 "write": true, 00:23:28.117 "unmap": true, 00:23:28.117 "flush": true, 00:23:28.117 "reset": true, 00:23:28.117 "nvme_admin": false, 00:23:28.117 "nvme_io": false, 00:23:28.117 "nvme_io_md": false, 00:23:28.117 "write_zeroes": true, 00:23:28.117 "zcopy": true, 00:23:28.117 "get_zone_info": false, 00:23:28.117 "zone_management": false, 00:23:28.117 "zone_append": false, 00:23:28.117 "compare": false, 00:23:28.117 "compare_and_write": false, 00:23:28.117 "abort": true, 00:23:28.117 "seek_hole": false, 00:23:28.117 "seek_data": false, 00:23:28.117 "copy": true, 00:23:28.117 "nvme_iov_md": false 00:23:28.117 }, 00:23:28.117 "memory_domains": [ 00:23:28.117 { 00:23:28.117 "dma_device_id": "system", 00:23:28.117 "dma_device_type": 1 00:23:28.117 }, 00:23:28.117 { 00:23:28.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.117 "dma_device_type": 2 00:23:28.117 } 00:23:28.117 ], 00:23:28.117 "driver_specific": {} 00:23:28.117 } 00:23:28.117 ] 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.117 BaseBdev4 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.117 [ 00:23:28.117 { 00:23:28.117 "name": "BaseBdev4", 00:23:28.117 
"aliases": [ 00:23:28.117 "5d0b9804-051d-4c73-b87d-777a45dd826a" 00:23:28.117 ], 00:23:28.117 "product_name": "Malloc disk", 00:23:28.117 "block_size": 512, 00:23:28.117 "num_blocks": 65536, 00:23:28.117 "uuid": "5d0b9804-051d-4c73-b87d-777a45dd826a", 00:23:28.117 "assigned_rate_limits": { 00:23:28.117 "rw_ios_per_sec": 0, 00:23:28.117 "rw_mbytes_per_sec": 0, 00:23:28.117 "r_mbytes_per_sec": 0, 00:23:28.117 "w_mbytes_per_sec": 0 00:23:28.117 }, 00:23:28.117 "claimed": false, 00:23:28.117 "zoned": false, 00:23:28.117 "supported_io_types": { 00:23:28.117 "read": true, 00:23:28.117 "write": true, 00:23:28.117 "unmap": true, 00:23:28.117 "flush": true, 00:23:28.117 "reset": true, 00:23:28.117 "nvme_admin": false, 00:23:28.117 "nvme_io": false, 00:23:28.117 "nvme_io_md": false, 00:23:28.117 "write_zeroes": true, 00:23:28.117 "zcopy": true, 00:23:28.117 "get_zone_info": false, 00:23:28.117 "zone_management": false, 00:23:28.117 "zone_append": false, 00:23:28.117 "compare": false, 00:23:28.117 "compare_and_write": false, 00:23:28.117 "abort": true, 00:23:28.117 "seek_hole": false, 00:23:28.117 "seek_data": false, 00:23:28.117 "copy": true, 00:23:28.117 "nvme_iov_md": false 00:23:28.117 }, 00:23:28.117 "memory_domains": [ 00:23:28.117 { 00:23:28.117 "dma_device_id": "system", 00:23:28.117 "dma_device_type": 1 00:23:28.117 }, 00:23:28.117 { 00:23:28.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.117 "dma_device_type": 2 00:23:28.117 } 00:23:28.117 ], 00:23:28.117 "driver_specific": {} 00:23:28.117 } 00:23:28.117 ] 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:28.117 
05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.117 [2024-11-20 05:33:59.865957] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:28.117 [2024-11-20 05:33:59.866001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:28.117 [2024-11-20 05:33:59.866021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:28.117 [2024-11-20 05:33:59.867853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:28.117 [2024-11-20 05:33:59.868018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:28.117 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.118 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:28.118 "name": "Existed_Raid", 00:23:28.118 "uuid": "d8eb84e6-dd89-4d6d-bb29-3594def72e18", 00:23:28.118 "strip_size_kb": 64, 00:23:28.118 "state": "configuring", 00:23:28.118 "raid_level": "raid5f", 00:23:28.118 "superblock": true, 00:23:28.118 "num_base_bdevs": 4, 00:23:28.118 "num_base_bdevs_discovered": 3, 00:23:28.118 "num_base_bdevs_operational": 4, 00:23:28.118 "base_bdevs_list": [ 00:23:28.118 { 00:23:28.118 "name": "BaseBdev1", 00:23:28.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.118 "is_configured": false, 00:23:28.118 "data_offset": 0, 00:23:28.118 "data_size": 0 00:23:28.118 }, 00:23:28.118 { 00:23:28.118 "name": "BaseBdev2", 00:23:28.118 "uuid": "16ca9a57-a6c1-48a8-9498-b3e2124e201c", 00:23:28.118 "is_configured": true, 00:23:28.118 "data_offset": 2048, 00:23:28.118 "data_size": 63488 00:23:28.118 }, 00:23:28.118 { 00:23:28.118 "name": "BaseBdev3", 
00:23:28.118 "uuid": "2167d2ad-975e-4c8f-b8a8-b9fd2086d76b", 00:23:28.118 "is_configured": true, 00:23:28.118 "data_offset": 2048, 00:23:28.118 "data_size": 63488 00:23:28.118 }, 00:23:28.118 { 00:23:28.118 "name": "BaseBdev4", 00:23:28.118 "uuid": "5d0b9804-051d-4c73-b87d-777a45dd826a", 00:23:28.118 "is_configured": true, 00:23:28.118 "data_offset": 2048, 00:23:28.118 "data_size": 63488 00:23:28.118 } 00:23:28.118 ] 00:23:28.118 }' 00:23:28.118 05:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.118 05:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.379 [2024-11-20 05:34:00.190001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:28.379 
05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.379 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:28.639 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.640 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:28.640 "name": "Existed_Raid", 00:23:28.640 "uuid": "d8eb84e6-dd89-4d6d-bb29-3594def72e18", 00:23:28.640 "strip_size_kb": 64, 00:23:28.640 "state": "configuring", 00:23:28.640 "raid_level": "raid5f", 00:23:28.640 "superblock": true, 00:23:28.640 "num_base_bdevs": 4, 00:23:28.640 "num_base_bdevs_discovered": 2, 00:23:28.640 "num_base_bdevs_operational": 4, 00:23:28.640 "base_bdevs_list": [ 00:23:28.640 { 00:23:28.640 "name": "BaseBdev1", 00:23:28.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.640 "is_configured": false, 00:23:28.640 "data_offset": 0, 00:23:28.640 "data_size": 0 00:23:28.640 }, 00:23:28.640 { 00:23:28.640 "name": null, 00:23:28.640 "uuid": "16ca9a57-a6c1-48a8-9498-b3e2124e201c", 00:23:28.640 "is_configured": false, 00:23:28.640 "data_offset": 0, 00:23:28.640 "data_size": 63488 00:23:28.640 }, 00:23:28.640 { 
00:23:28.640 "name": "BaseBdev3", 00:23:28.640 "uuid": "2167d2ad-975e-4c8f-b8a8-b9fd2086d76b", 00:23:28.640 "is_configured": true, 00:23:28.640 "data_offset": 2048, 00:23:28.640 "data_size": 63488 00:23:28.640 }, 00:23:28.640 { 00:23:28.640 "name": "BaseBdev4", 00:23:28.640 "uuid": "5d0b9804-051d-4c73-b87d-777a45dd826a", 00:23:28.640 "is_configured": true, 00:23:28.640 "data_offset": 2048, 00:23:28.640 "data_size": 63488 00:23:28.640 } 00:23:28.640 ] 00:23:28.640 }' 00:23:28.640 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.640 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.902 [2024-11-20 05:34:00.544300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:28.902 BaseBdev1 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.902 [ 00:23:28.902 { 00:23:28.902 "name": "BaseBdev1", 00:23:28.902 "aliases": [ 00:23:28.902 "aed7dd7d-6458-4638-9059-f564191ee1bf" 00:23:28.902 ], 00:23:28.902 "product_name": "Malloc disk", 00:23:28.902 "block_size": 512, 00:23:28.902 "num_blocks": 65536, 00:23:28.902 "uuid": "aed7dd7d-6458-4638-9059-f564191ee1bf", 00:23:28.902 "assigned_rate_limits": { 00:23:28.902 "rw_ios_per_sec": 0, 00:23:28.902 "rw_mbytes_per_sec": 0, 00:23:28.902 
"r_mbytes_per_sec": 0, 00:23:28.902 "w_mbytes_per_sec": 0 00:23:28.902 }, 00:23:28.902 "claimed": true, 00:23:28.902 "claim_type": "exclusive_write", 00:23:28.902 "zoned": false, 00:23:28.902 "supported_io_types": { 00:23:28.902 "read": true, 00:23:28.902 "write": true, 00:23:28.902 "unmap": true, 00:23:28.902 "flush": true, 00:23:28.902 "reset": true, 00:23:28.902 "nvme_admin": false, 00:23:28.902 "nvme_io": false, 00:23:28.902 "nvme_io_md": false, 00:23:28.902 "write_zeroes": true, 00:23:28.902 "zcopy": true, 00:23:28.902 "get_zone_info": false, 00:23:28.902 "zone_management": false, 00:23:28.902 "zone_append": false, 00:23:28.902 "compare": false, 00:23:28.902 "compare_and_write": false, 00:23:28.902 "abort": true, 00:23:28.902 "seek_hole": false, 00:23:28.902 "seek_data": false, 00:23:28.902 "copy": true, 00:23:28.902 "nvme_iov_md": false 00:23:28.902 }, 00:23:28.902 "memory_domains": [ 00:23:28.902 { 00:23:28.902 "dma_device_id": "system", 00:23:28.902 "dma_device_type": 1 00:23:28.902 }, 00:23:28.902 { 00:23:28.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.902 "dma_device_type": 2 00:23:28.902 } 00:23:28.902 ], 00:23:28.902 "driver_specific": {} 00:23:28.902 } 00:23:28.902 ] 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:28.902 05:34:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:28.902 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.903 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.903 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.903 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:28.903 "name": "Existed_Raid", 00:23:28.903 "uuid": "d8eb84e6-dd89-4d6d-bb29-3594def72e18", 00:23:28.903 "strip_size_kb": 64, 00:23:28.903 "state": "configuring", 00:23:28.903 "raid_level": "raid5f", 00:23:28.903 "superblock": true, 00:23:28.903 "num_base_bdevs": 4, 00:23:28.903 "num_base_bdevs_discovered": 3, 00:23:28.903 "num_base_bdevs_operational": 4, 00:23:28.903 "base_bdevs_list": [ 00:23:28.903 { 00:23:28.903 "name": "BaseBdev1", 00:23:28.903 "uuid": "aed7dd7d-6458-4638-9059-f564191ee1bf", 00:23:28.903 "is_configured": true, 00:23:28.903 "data_offset": 2048, 00:23:28.903 "data_size": 63488 00:23:28.903 
}, 00:23:28.903 { 00:23:28.903 "name": null, 00:23:28.903 "uuid": "16ca9a57-a6c1-48a8-9498-b3e2124e201c", 00:23:28.903 "is_configured": false, 00:23:28.903 "data_offset": 0, 00:23:28.903 "data_size": 63488 00:23:28.903 }, 00:23:28.903 { 00:23:28.903 "name": "BaseBdev3", 00:23:28.903 "uuid": "2167d2ad-975e-4c8f-b8a8-b9fd2086d76b", 00:23:28.903 "is_configured": true, 00:23:28.903 "data_offset": 2048, 00:23:28.903 "data_size": 63488 00:23:28.903 }, 00:23:28.903 { 00:23:28.903 "name": "BaseBdev4", 00:23:28.903 "uuid": "5d0b9804-051d-4c73-b87d-777a45dd826a", 00:23:28.903 "is_configured": true, 00:23:28.903 "data_offset": 2048, 00:23:28.903 "data_size": 63488 00:23:28.903 } 00:23:28.903 ] 00:23:28.903 }' 00:23:28.903 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.903 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.163 
[2024-11-20 05:34:00.916465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:23:29.163 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.163 "name": "Existed_Raid", 00:23:29.163 "uuid": "d8eb84e6-dd89-4d6d-bb29-3594def72e18", 00:23:29.163 "strip_size_kb": 64, 00:23:29.163 "state": "configuring", 00:23:29.163 "raid_level": "raid5f", 00:23:29.163 "superblock": true, 00:23:29.163 "num_base_bdevs": 4, 00:23:29.163 "num_base_bdevs_discovered": 2, 00:23:29.163 "num_base_bdevs_operational": 4, 00:23:29.163 "base_bdevs_list": [ 00:23:29.163 { 00:23:29.163 "name": "BaseBdev1", 00:23:29.163 "uuid": "aed7dd7d-6458-4638-9059-f564191ee1bf", 00:23:29.163 "is_configured": true, 00:23:29.163 "data_offset": 2048, 00:23:29.163 "data_size": 63488 00:23:29.163 }, 00:23:29.163 { 00:23:29.163 "name": null, 00:23:29.163 "uuid": "16ca9a57-a6c1-48a8-9498-b3e2124e201c", 00:23:29.163 "is_configured": false, 00:23:29.163 "data_offset": 0, 00:23:29.163 "data_size": 63488 00:23:29.163 }, 00:23:29.163 { 00:23:29.163 "name": null, 00:23:29.163 "uuid": "2167d2ad-975e-4c8f-b8a8-b9fd2086d76b", 00:23:29.163 "is_configured": false, 00:23:29.163 "data_offset": 0, 00:23:29.163 "data_size": 63488 00:23:29.163 }, 00:23:29.163 { 00:23:29.163 "name": "BaseBdev4", 00:23:29.163 "uuid": "5d0b9804-051d-4c73-b87d-777a45dd826a", 00:23:29.163 "is_configured": true, 00:23:29.163 "data_offset": 2048, 00:23:29.163 "data_size": 63488 00:23:29.163 } 00:23:29.163 ] 00:23:29.163 }' 00:23:29.164 05:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:29.164 05:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.425 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.425 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.425 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:23:29.425 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:29.425 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.705 [2024-11-20 05:34:01.272513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.705 05:34:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.705 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.705 "name": "Existed_Raid", 00:23:29.705 "uuid": "d8eb84e6-dd89-4d6d-bb29-3594def72e18", 00:23:29.705 "strip_size_kb": 64, 00:23:29.705 "state": "configuring", 00:23:29.705 "raid_level": "raid5f", 00:23:29.705 "superblock": true, 00:23:29.705 "num_base_bdevs": 4, 00:23:29.705 "num_base_bdevs_discovered": 3, 00:23:29.705 "num_base_bdevs_operational": 4, 00:23:29.705 "base_bdevs_list": [ 00:23:29.705 { 00:23:29.705 "name": "BaseBdev1", 00:23:29.705 "uuid": "aed7dd7d-6458-4638-9059-f564191ee1bf", 00:23:29.705 "is_configured": true, 00:23:29.705 "data_offset": 2048, 00:23:29.705 "data_size": 63488 00:23:29.705 }, 00:23:29.705 { 00:23:29.705 "name": null, 00:23:29.705 "uuid": "16ca9a57-a6c1-48a8-9498-b3e2124e201c", 00:23:29.705 "is_configured": false, 00:23:29.705 "data_offset": 0, 00:23:29.705 "data_size": 63488 00:23:29.705 }, 00:23:29.705 { 00:23:29.705 "name": "BaseBdev3", 00:23:29.705 "uuid": "2167d2ad-975e-4c8f-b8a8-b9fd2086d76b", 00:23:29.705 "is_configured": true, 00:23:29.706 "data_offset": 2048, 00:23:29.706 "data_size": 63488 00:23:29.706 }, 00:23:29.706 { 
00:23:29.706 "name": "BaseBdev4", 00:23:29.706 "uuid": "5d0b9804-051d-4c73-b87d-777a45dd826a", 00:23:29.706 "is_configured": true, 00:23:29.706 "data_offset": 2048, 00:23:29.706 "data_size": 63488 00:23:29.706 } 00:23:29.706 ] 00:23:29.706 }' 00:23:29.706 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:29.706 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.017 [2024-11-20 05:34:01.608597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.017 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.017 "name": "Existed_Raid", 00:23:30.017 "uuid": "d8eb84e6-dd89-4d6d-bb29-3594def72e18", 00:23:30.017 "strip_size_kb": 64, 00:23:30.017 "state": "configuring", 00:23:30.017 "raid_level": "raid5f", 00:23:30.017 "superblock": true, 00:23:30.017 "num_base_bdevs": 4, 00:23:30.017 "num_base_bdevs_discovered": 2, 00:23:30.017 
"num_base_bdevs_operational": 4, 00:23:30.017 "base_bdevs_list": [ 00:23:30.017 { 00:23:30.017 "name": null, 00:23:30.017 "uuid": "aed7dd7d-6458-4638-9059-f564191ee1bf", 00:23:30.017 "is_configured": false, 00:23:30.017 "data_offset": 0, 00:23:30.017 "data_size": 63488 00:23:30.017 }, 00:23:30.017 { 00:23:30.017 "name": null, 00:23:30.017 "uuid": "16ca9a57-a6c1-48a8-9498-b3e2124e201c", 00:23:30.017 "is_configured": false, 00:23:30.017 "data_offset": 0, 00:23:30.017 "data_size": 63488 00:23:30.017 }, 00:23:30.017 { 00:23:30.017 "name": "BaseBdev3", 00:23:30.017 "uuid": "2167d2ad-975e-4c8f-b8a8-b9fd2086d76b", 00:23:30.017 "is_configured": true, 00:23:30.017 "data_offset": 2048, 00:23:30.017 "data_size": 63488 00:23:30.017 }, 00:23:30.017 { 00:23:30.017 "name": "BaseBdev4", 00:23:30.017 "uuid": "5d0b9804-051d-4c73-b87d-777a45dd826a", 00:23:30.018 "is_configured": true, 00:23:30.018 "data_offset": 2048, 00:23:30.018 "data_size": 63488 00:23:30.018 } 00:23:30.018 ] 00:23:30.018 }' 00:23:30.018 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.018 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.279 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.279 05:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:30.279 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.279 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.279 05:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.279 [2024-11-20 05:34:02.014442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.279 "name": "Existed_Raid", 00:23:30.279 "uuid": "d8eb84e6-dd89-4d6d-bb29-3594def72e18", 00:23:30.279 "strip_size_kb": 64, 00:23:30.279 "state": "configuring", 00:23:30.279 "raid_level": "raid5f", 00:23:30.279 "superblock": true, 00:23:30.279 "num_base_bdevs": 4, 00:23:30.279 "num_base_bdevs_discovered": 3, 00:23:30.279 "num_base_bdevs_operational": 4, 00:23:30.279 "base_bdevs_list": [ 00:23:30.279 { 00:23:30.279 "name": null, 00:23:30.279 "uuid": "aed7dd7d-6458-4638-9059-f564191ee1bf", 00:23:30.279 "is_configured": false, 00:23:30.279 "data_offset": 0, 00:23:30.279 "data_size": 63488 00:23:30.279 }, 00:23:30.279 { 00:23:30.279 "name": "BaseBdev2", 00:23:30.279 "uuid": "16ca9a57-a6c1-48a8-9498-b3e2124e201c", 00:23:30.279 "is_configured": true, 00:23:30.279 "data_offset": 2048, 00:23:30.279 "data_size": 63488 00:23:30.279 }, 00:23:30.279 { 00:23:30.279 "name": "BaseBdev3", 00:23:30.279 "uuid": "2167d2ad-975e-4c8f-b8a8-b9fd2086d76b", 00:23:30.279 "is_configured": true, 00:23:30.279 "data_offset": 2048, 00:23:30.279 "data_size": 63488 00:23:30.279 }, 00:23:30.279 { 00:23:30.279 "name": "BaseBdev4", 00:23:30.279 "uuid": "5d0b9804-051d-4c73-b87d-777a45dd826a", 00:23:30.279 "is_configured": true, 00:23:30.279 "data_offset": 2048, 00:23:30.279 "data_size": 63488 00:23:30.279 } 00:23:30.279 ] 00:23:30.279 }' 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.279 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:23:30.540 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.540 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.540 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.540 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:30.540 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.540 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:30.540 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.540 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:30.541 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.541 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.541 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.541 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aed7dd7d-6458-4638-9059-f564191ee1bf 00:23:30.541 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.541 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.800 [2024-11-20 05:34:02.396552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:30.800 NewBaseBdev 00:23:30.800 [2024-11-20 05:34:02.396827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:30.800 
[2024-11-20 05:34:02.396841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:30.800 [2024-11-20 05:34:02.397046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.800 [2024-11-20 05:34:02.400800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:30.800 [2024-11-20 05:34:02.400818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:30.800 [2024-11-20 05:34:02.400987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.800 [ 00:23:30.800 { 00:23:30.800 "name": "NewBaseBdev", 00:23:30.800 "aliases": [ 00:23:30.800 "aed7dd7d-6458-4638-9059-f564191ee1bf" 00:23:30.800 ], 00:23:30.800 "product_name": "Malloc disk", 00:23:30.800 "block_size": 512, 00:23:30.800 "num_blocks": 65536, 00:23:30.800 "uuid": "aed7dd7d-6458-4638-9059-f564191ee1bf", 00:23:30.800 "assigned_rate_limits": { 00:23:30.800 "rw_ios_per_sec": 0, 00:23:30.800 "rw_mbytes_per_sec": 0, 00:23:30.800 "r_mbytes_per_sec": 0, 00:23:30.800 "w_mbytes_per_sec": 0 00:23:30.800 }, 00:23:30.800 "claimed": true, 00:23:30.800 "claim_type": "exclusive_write", 00:23:30.800 "zoned": false, 00:23:30.800 "supported_io_types": { 00:23:30.800 "read": true, 00:23:30.800 "write": true, 00:23:30.800 "unmap": true, 00:23:30.800 "flush": true, 00:23:30.800 "reset": true, 00:23:30.800 "nvme_admin": false, 00:23:30.800 "nvme_io": false, 00:23:30.800 "nvme_io_md": false, 00:23:30.800 "write_zeroes": true, 00:23:30.800 "zcopy": true, 00:23:30.800 "get_zone_info": false, 00:23:30.800 "zone_management": false, 00:23:30.800 "zone_append": false, 00:23:30.800 "compare": false, 00:23:30.800 "compare_and_write": false, 00:23:30.800 "abort": true, 00:23:30.800 "seek_hole": false, 00:23:30.800 "seek_data": false, 00:23:30.800 "copy": true, 00:23:30.800 "nvme_iov_md": false 00:23:30.800 }, 00:23:30.800 "memory_domains": [ 00:23:30.800 { 00:23:30.800 "dma_device_id": "system", 00:23:30.800 "dma_device_type": 1 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.800 "dma_device_type": 2 00:23:30.800 } 00:23:30.800 ], 00:23:30.800 "driver_specific": {} 00:23:30.800 } 00:23:30.800 ] 00:23:30.800 05:34:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.800 "name": "Existed_Raid", 00:23:30.800 "uuid": "d8eb84e6-dd89-4d6d-bb29-3594def72e18", 00:23:30.800 "strip_size_kb": 64, 00:23:30.800 "state": "online", 00:23:30.800 "raid_level": "raid5f", 00:23:30.800 "superblock": true, 00:23:30.800 "num_base_bdevs": 4, 00:23:30.800 "num_base_bdevs_discovered": 4, 00:23:30.800 "num_base_bdevs_operational": 4, 00:23:30.800 "base_bdevs_list": [ 00:23:30.800 { 00:23:30.800 "name": "NewBaseBdev", 00:23:30.800 "uuid": "aed7dd7d-6458-4638-9059-f564191ee1bf", 00:23:30.800 "is_configured": true, 00:23:30.800 "data_offset": 2048, 00:23:30.800 "data_size": 63488 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "name": "BaseBdev2", 00:23:30.800 "uuid": "16ca9a57-a6c1-48a8-9498-b3e2124e201c", 00:23:30.800 "is_configured": true, 00:23:30.800 "data_offset": 2048, 00:23:30.800 "data_size": 63488 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "name": "BaseBdev3", 00:23:30.800 "uuid": "2167d2ad-975e-4c8f-b8a8-b9fd2086d76b", 00:23:30.800 "is_configured": true, 00:23:30.800 "data_offset": 2048, 00:23:30.800 "data_size": 63488 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "name": "BaseBdev4", 00:23:30.800 "uuid": "5d0b9804-051d-4c73-b87d-777a45dd826a", 00:23:30.800 "is_configured": true, 00:23:30.800 "data_offset": 2048, 00:23:30.800 "data_size": 63488 00:23:30.800 } 00:23:30.800 ] 00:23:30.800 }' 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.800 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.058 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:31.058 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:31.058 05:34:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:31.058 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:31.058 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:31.058 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:31.058 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:31.058 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:31.058 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.058 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.058 [2024-11-20 05:34:02.741544] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:31.058 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.058 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:31.058 "name": "Existed_Raid", 00:23:31.058 "aliases": [ 00:23:31.058 "d8eb84e6-dd89-4d6d-bb29-3594def72e18" 00:23:31.058 ], 00:23:31.058 "product_name": "Raid Volume", 00:23:31.058 "block_size": 512, 00:23:31.058 "num_blocks": 190464, 00:23:31.058 "uuid": "d8eb84e6-dd89-4d6d-bb29-3594def72e18", 00:23:31.058 "assigned_rate_limits": { 00:23:31.058 "rw_ios_per_sec": 0, 00:23:31.058 "rw_mbytes_per_sec": 0, 00:23:31.058 "r_mbytes_per_sec": 0, 00:23:31.058 "w_mbytes_per_sec": 0 00:23:31.058 }, 00:23:31.058 "claimed": false, 00:23:31.058 "zoned": false, 00:23:31.058 "supported_io_types": { 00:23:31.058 "read": true, 00:23:31.058 "write": true, 00:23:31.058 "unmap": false, 00:23:31.058 "flush": false, 00:23:31.058 "reset": true, 00:23:31.058 "nvme_admin": false, 00:23:31.058 "nvme_io": false, 
00:23:31.058 "nvme_io_md": false, 00:23:31.058 "write_zeroes": true, 00:23:31.058 "zcopy": false, 00:23:31.058 "get_zone_info": false, 00:23:31.058 "zone_management": false, 00:23:31.058 "zone_append": false, 00:23:31.058 "compare": false, 00:23:31.058 "compare_and_write": false, 00:23:31.058 "abort": false, 00:23:31.058 "seek_hole": false, 00:23:31.058 "seek_data": false, 00:23:31.058 "copy": false, 00:23:31.058 "nvme_iov_md": false 00:23:31.058 }, 00:23:31.058 "driver_specific": { 00:23:31.058 "raid": { 00:23:31.058 "uuid": "d8eb84e6-dd89-4d6d-bb29-3594def72e18", 00:23:31.059 "strip_size_kb": 64, 00:23:31.059 "state": "online", 00:23:31.059 "raid_level": "raid5f", 00:23:31.059 "superblock": true, 00:23:31.059 "num_base_bdevs": 4, 00:23:31.059 "num_base_bdevs_discovered": 4, 00:23:31.059 "num_base_bdevs_operational": 4, 00:23:31.059 "base_bdevs_list": [ 00:23:31.059 { 00:23:31.059 "name": "NewBaseBdev", 00:23:31.059 "uuid": "aed7dd7d-6458-4638-9059-f564191ee1bf", 00:23:31.059 "is_configured": true, 00:23:31.059 "data_offset": 2048, 00:23:31.059 "data_size": 63488 00:23:31.059 }, 00:23:31.059 { 00:23:31.059 "name": "BaseBdev2", 00:23:31.059 "uuid": "16ca9a57-a6c1-48a8-9498-b3e2124e201c", 00:23:31.059 "is_configured": true, 00:23:31.059 "data_offset": 2048, 00:23:31.059 "data_size": 63488 00:23:31.059 }, 00:23:31.059 { 00:23:31.059 "name": "BaseBdev3", 00:23:31.059 "uuid": "2167d2ad-975e-4c8f-b8a8-b9fd2086d76b", 00:23:31.059 "is_configured": true, 00:23:31.059 "data_offset": 2048, 00:23:31.059 "data_size": 63488 00:23:31.059 }, 00:23:31.059 { 00:23:31.059 "name": "BaseBdev4", 00:23:31.059 "uuid": "5d0b9804-051d-4c73-b87d-777a45dd826a", 00:23:31.059 "is_configured": true, 00:23:31.059 "data_offset": 2048, 00:23:31.059 "data_size": 63488 00:23:31.059 } 00:23:31.059 ] 00:23:31.059 } 00:23:31.059 } 00:23:31.059 }' 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:31.059 BaseBdev2 00:23:31.059 BaseBdev3 00:23:31.059 BaseBdev4' 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.059 05:34:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:31.059 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.318 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:31.318 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:31.318 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:31.318 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:31.318 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:31.318 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.318 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.318 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.318 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:31.318 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:31.318 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:31.318 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.319 [2024-11-20 05:34:02.965392] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:31.319 [2024-11-20 05:34:02.965416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:31.319 [2024-11-20 05:34:02.965475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:31.319 [2024-11-20 05:34:02.965711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:31.319 [2024-11-20 05:34:02.965723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81138 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 81138 ']' 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 81138 00:23:31.319 05:34:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81138 00:23:31.319 killing process with pid 81138 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81138' 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 81138 00:23:31.319 [2024-11-20 05:34:03.000599] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:31.319 05:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 81138 00:23:31.580 [2024-11-20 05:34:03.198158] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:32.152 05:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:23:32.152 00:23:32.152 real 0m8.054s 00:23:32.152 user 0m12.834s 00:23:32.152 sys 0m1.464s 00:23:32.152 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:32.152 ************************************ 00:23:32.152 END TEST raid5f_state_function_test_sb 00:23:32.152 ************************************ 00:23:32.152 05:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.152 05:34:03 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:23:32.152 05:34:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:23:32.152 
05:34:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:32.152 05:34:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:32.152 ************************************ 00:23:32.152 START TEST raid5f_superblock_test 00:23:32.152 ************************************ 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81770 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81770 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81770 ']' 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.152 05:34:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:32.153 05:34:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:32.153 05:34:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.153 05:34:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:32.153 05:34:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.153 [2024-11-20 05:34:03.890759] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:23:32.153 [2024-11-20 05:34:03.890882] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81770 ] 00:23:32.413 [2024-11-20 05:34:04.045791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.413 [2024-11-20 05:34:04.130127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.413 [2024-11-20 05:34:04.240163] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:32.414 [2024-11-20 05:34:04.240206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.987 malloc1 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.987 [2024-11-20 05:34:04.723134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:32.987 [2024-11-20 05:34:04.723191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.987 [2024-11-20 05:34:04.723209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:32.987 [2024-11-20 05:34:04.723217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.987 [2024-11-20 05:34:04.725013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.987 [2024-11-20 05:34:04.725047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:32.987 pt1 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.987 malloc2 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.987 [2024-11-20 05:34:04.754731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:32.987 [2024-11-20 05:34:04.754775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.987 [2024-11-20 05:34:04.754792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:32.987 [2024-11-20 05:34:04.754799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.987 [2024-11-20 05:34:04.756563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.987 [2024-11-20 05:34:04.756592] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:32.987 pt2 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.987 malloc3 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.987 [2024-11-20 05:34:04.805530] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:32.987 [2024-11-20 05:34:04.805582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.987 [2024-11-20 05:34:04.805602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:32.987 [2024-11-20 05:34:04.805609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.987 [2024-11-20 05:34:04.807356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.987 [2024-11-20 05:34:04.807399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:32.987 pt3 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:23:32.987 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.988 05:34:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.263 malloc4 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.263 [2024-11-20 05:34:04.841049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:33.263 [2024-11-20 05:34:04.841177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.263 [2024-11-20 05:34:04.841211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:33.263 [2024-11-20 05:34:04.841257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.263 [2024-11-20 05:34:04.842985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.263 [2024-11-20 05:34:04.843078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:33.263 pt4 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:33.263 [2024-11-20 05:34:04.849082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:33.263 [2024-11-20 05:34:04.850599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:33.263 [2024-11-20 05:34:04.850651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:33.263 [2024-11-20 05:34:04.850703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:33.263 [2024-11-20 05:34:04.850854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:33.263 [2024-11-20 05:34:04.850865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:33.263 [2024-11-20 05:34:04.851064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:33.263 [2024-11-20 05:34:04.854954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:33.263 [2024-11-20 05:34:04.854971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:33.263 [2024-11-20 05:34:04.855119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:33.263 
05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:33.263 "name": "raid_bdev1", 00:23:33.263 "uuid": "8c580b0d-1a68-4dfa-a39a-08de3311c01f", 00:23:33.263 "strip_size_kb": 64, 00:23:33.263 "state": "online", 00:23:33.263 "raid_level": "raid5f", 00:23:33.263 "superblock": true, 00:23:33.263 "num_base_bdevs": 4, 00:23:33.263 "num_base_bdevs_discovered": 4, 00:23:33.263 "num_base_bdevs_operational": 4, 00:23:33.263 "base_bdevs_list": [ 00:23:33.263 { 00:23:33.263 "name": "pt1", 00:23:33.263 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:33.263 "is_configured": true, 00:23:33.263 "data_offset": 2048, 00:23:33.263 "data_size": 63488 00:23:33.263 }, 00:23:33.263 { 00:23:33.263 "name": "pt2", 00:23:33.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:33.263 "is_configured": true, 00:23:33.263 "data_offset": 2048, 00:23:33.263 
"data_size": 63488 00:23:33.263 }, 00:23:33.263 { 00:23:33.263 "name": "pt3", 00:23:33.263 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:33.263 "is_configured": true, 00:23:33.263 "data_offset": 2048, 00:23:33.263 "data_size": 63488 00:23:33.263 }, 00:23:33.263 { 00:23:33.263 "name": "pt4", 00:23:33.263 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:33.263 "is_configured": true, 00:23:33.263 "data_offset": 2048, 00:23:33.263 "data_size": 63488 00:23:33.263 } 00:23:33.263 ] 00:23:33.263 }' 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:33.263 05:34:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.558 [2024-11-20 05:34:05.179636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:33.558 "name": "raid_bdev1", 00:23:33.558 "aliases": [ 00:23:33.558 "8c580b0d-1a68-4dfa-a39a-08de3311c01f" 00:23:33.558 ], 00:23:33.558 "product_name": "Raid Volume", 00:23:33.558 "block_size": 512, 00:23:33.558 "num_blocks": 190464, 00:23:33.558 "uuid": "8c580b0d-1a68-4dfa-a39a-08de3311c01f", 00:23:33.558 "assigned_rate_limits": { 00:23:33.558 "rw_ios_per_sec": 0, 00:23:33.558 "rw_mbytes_per_sec": 0, 00:23:33.558 "r_mbytes_per_sec": 0, 00:23:33.558 "w_mbytes_per_sec": 0 00:23:33.558 }, 00:23:33.558 "claimed": false, 00:23:33.558 "zoned": false, 00:23:33.558 "supported_io_types": { 00:23:33.558 "read": true, 00:23:33.558 "write": true, 00:23:33.558 "unmap": false, 00:23:33.558 "flush": false, 00:23:33.558 "reset": true, 00:23:33.558 "nvme_admin": false, 00:23:33.558 "nvme_io": false, 00:23:33.558 "nvme_io_md": false, 00:23:33.558 "write_zeroes": true, 00:23:33.558 "zcopy": false, 00:23:33.558 "get_zone_info": false, 00:23:33.558 "zone_management": false, 00:23:33.558 "zone_append": false, 00:23:33.558 "compare": false, 00:23:33.558 "compare_and_write": false, 00:23:33.558 "abort": false, 00:23:33.558 "seek_hole": false, 00:23:33.558 "seek_data": false, 00:23:33.558 "copy": false, 00:23:33.558 "nvme_iov_md": false 00:23:33.558 }, 00:23:33.558 "driver_specific": { 00:23:33.558 "raid": { 00:23:33.558 "uuid": "8c580b0d-1a68-4dfa-a39a-08de3311c01f", 00:23:33.558 "strip_size_kb": 64, 00:23:33.558 "state": "online", 00:23:33.558 "raid_level": "raid5f", 00:23:33.558 "superblock": true, 00:23:33.558 "num_base_bdevs": 4, 00:23:33.558 "num_base_bdevs_discovered": 4, 00:23:33.558 "num_base_bdevs_operational": 4, 00:23:33.558 "base_bdevs_list": [ 00:23:33.558 { 00:23:33.558 "name": "pt1", 00:23:33.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:33.558 "is_configured": true, 00:23:33.558 "data_offset": 2048, 
00:23:33.558 "data_size": 63488 00:23:33.558 }, 00:23:33.558 { 00:23:33.558 "name": "pt2", 00:23:33.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:33.558 "is_configured": true, 00:23:33.558 "data_offset": 2048, 00:23:33.558 "data_size": 63488 00:23:33.558 }, 00:23:33.558 { 00:23:33.558 "name": "pt3", 00:23:33.558 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:33.558 "is_configured": true, 00:23:33.558 "data_offset": 2048, 00:23:33.558 "data_size": 63488 00:23:33.558 }, 00:23:33.558 { 00:23:33.558 "name": "pt4", 00:23:33.558 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:33.558 "is_configured": true, 00:23:33.558 "data_offset": 2048, 00:23:33.558 "data_size": 63488 00:23:33.558 } 00:23:33.558 ] 00:23:33.558 } 00:23:33.558 } 00:23:33.558 }' 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:33.558 pt2 00:23:33.558 pt3 00:23:33.558 pt4' 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.558 05:34:05 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.558 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:33.559 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.821 [2024-11-20 05:34:05.407704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8c580b0d-1a68-4dfa-a39a-08de3311c01f 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
8c580b0d-1a68-4dfa-a39a-08de3311c01f ']' 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.821 [2024-11-20 05:34:05.435558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:33.821 [2024-11-20 05:34:05.435579] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:33.821 [2024-11-20 05:34:05.435642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:33.821 [2024-11-20 05:34:05.435713] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:33.821 [2024-11-20 05:34:05.435724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:33.821 
05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.821 05:34:05 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.821 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.822 [2024-11-20 05:34:05.551598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:33.822 [2024-11-20 05:34:05.553145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:33.822 [2024-11-20 05:34:05.553188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:33.822 [2024-11-20 05:34:05.553216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:33.822 [2024-11-20 05:34:05.553256] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:33.822 [2024-11-20 05:34:05.553294] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:33.822 [2024-11-20 05:34:05.553310] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:33.822 [2024-11-20 05:34:05.553324] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:23:33.822 [2024-11-20 05:34:05.553334] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:33.822 [2024-11-20 05:34:05.553343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:33.822 request: 00:23:33.822 { 00:23:33.822 "name": "raid_bdev1", 00:23:33.822 "raid_level": "raid5f", 00:23:33.822 "base_bdevs": [ 00:23:33.822 "malloc1", 00:23:33.822 "malloc2", 00:23:33.822 "malloc3", 00:23:33.822 "malloc4" 00:23:33.822 ], 00:23:33.822 "strip_size_kb": 64, 00:23:33.822 "superblock": false, 00:23:33.822 "method": "bdev_raid_create", 00:23:33.822 "req_id": 1 00:23:33.822 } 00:23:33.822 Got JSON-RPC error response 
00:23:33.822 response: 00:23:33.822 { 00:23:33.822 "code": -17, 00:23:33.822 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:33.822 } 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.822 [2024-11-20 05:34:05.591575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:33.822 [2024-11-20 05:34:05.591620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:23:33.822 [2024-11-20 05:34:05.591633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:33.822 [2024-11-20 05:34:05.591641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.822 [2024-11-20 05:34:05.593440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.822 [2024-11-20 05:34:05.593473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:33.822 [2024-11-20 05:34:05.593538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:33.822 [2024-11-20 05:34:05.593584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:33.822 pt1 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:33.822 "name": "raid_bdev1", 00:23:33.822 "uuid": "8c580b0d-1a68-4dfa-a39a-08de3311c01f", 00:23:33.822 "strip_size_kb": 64, 00:23:33.822 "state": "configuring", 00:23:33.822 "raid_level": "raid5f", 00:23:33.822 "superblock": true, 00:23:33.822 "num_base_bdevs": 4, 00:23:33.822 "num_base_bdevs_discovered": 1, 00:23:33.822 "num_base_bdevs_operational": 4, 00:23:33.822 "base_bdevs_list": [ 00:23:33.822 { 00:23:33.822 "name": "pt1", 00:23:33.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:33.822 "is_configured": true, 00:23:33.822 "data_offset": 2048, 00:23:33.822 "data_size": 63488 00:23:33.822 }, 00:23:33.822 { 00:23:33.822 "name": null, 00:23:33.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:33.822 "is_configured": false, 00:23:33.822 "data_offset": 2048, 00:23:33.822 "data_size": 63488 00:23:33.822 }, 00:23:33.822 { 00:23:33.822 "name": null, 00:23:33.822 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:33.822 "is_configured": false, 00:23:33.822 "data_offset": 2048, 00:23:33.822 "data_size": 63488 00:23:33.822 }, 00:23:33.822 { 00:23:33.822 "name": null, 00:23:33.822 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:33.822 "is_configured": false, 00:23:33.822 "data_offset": 2048, 00:23:33.822 "data_size": 63488 00:23:33.822 } 00:23:33.822 ] 00:23:33.822 }' 
00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:33.822 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.393 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:23:34.393 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:34.393 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.393 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.393 [2024-11-20 05:34:05.923661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:34.393 [2024-11-20 05:34:05.923722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.393 [2024-11-20 05:34:05.923737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:34.393 [2024-11-20 05:34:05.923745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.393 [2024-11-20 05:34:05.924073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.393 [2024-11-20 05:34:05.924085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:34.393 [2024-11-20 05:34:05.924142] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:34.393 [2024-11-20 05:34:05.924158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:34.393 pt2 00:23:34.393 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.393 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.394 [2024-11-20 05:34:05.931658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:34.394 "name": "raid_bdev1", 00:23:34.394 "uuid": "8c580b0d-1a68-4dfa-a39a-08de3311c01f", 00:23:34.394 "strip_size_kb": 64, 00:23:34.394 "state": "configuring", 00:23:34.394 "raid_level": "raid5f", 00:23:34.394 "superblock": true, 00:23:34.394 "num_base_bdevs": 4, 00:23:34.394 "num_base_bdevs_discovered": 1, 00:23:34.394 "num_base_bdevs_operational": 4, 00:23:34.394 "base_bdevs_list": [ 00:23:34.394 { 00:23:34.394 "name": "pt1", 00:23:34.394 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:34.394 "is_configured": true, 00:23:34.394 "data_offset": 2048, 00:23:34.394 "data_size": 63488 00:23:34.394 }, 00:23:34.394 { 00:23:34.394 "name": null, 00:23:34.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:34.394 "is_configured": false, 00:23:34.394 "data_offset": 0, 00:23:34.394 "data_size": 63488 00:23:34.394 }, 00:23:34.394 { 00:23:34.394 "name": null, 00:23:34.394 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:34.394 "is_configured": false, 00:23:34.394 "data_offset": 2048, 00:23:34.394 "data_size": 63488 00:23:34.394 }, 00:23:34.394 { 00:23:34.394 "name": null, 00:23:34.394 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:34.394 "is_configured": false, 00:23:34.394 "data_offset": 2048, 00:23:34.394 "data_size": 63488 00:23:34.394 } 00:23:34.394 ] 00:23:34.394 }' 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:34.394 05:34:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.652 [2024-11-20 05:34:06.267725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:34.652 [2024-11-20 05:34:06.267775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.652 [2024-11-20 05:34:06.267789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:34.652 [2024-11-20 05:34:06.267796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.652 [2024-11-20 05:34:06.268140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.652 [2024-11-20 05:34:06.268150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:34.652 [2024-11-20 05:34:06.268210] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:34.652 [2024-11-20 05:34:06.268225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:34.652 pt2 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.652 [2024-11-20 05:34:06.275705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:23:34.652 [2024-11-20 05:34:06.275744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.652 [2024-11-20 05:34:06.275756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:34.652 [2024-11-20 05:34:06.275762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.652 [2024-11-20 05:34:06.276054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.652 [2024-11-20 05:34:06.276068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:34.652 [2024-11-20 05:34:06.276116] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:34.652 [2024-11-20 05:34:06.276129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:34.652 pt3 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.652 [2024-11-20 05:34:06.283694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:34.652 [2024-11-20 05:34:06.283735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.652 [2024-11-20 05:34:06.283748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:34.652 [2024-11-20 05:34:06.283755] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.652 [2024-11-20 05:34:06.284070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.652 [2024-11-20 05:34:06.284080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:34.652 [2024-11-20 05:34:06.284126] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:34.652 [2024-11-20 05:34:06.284139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:34.652 [2024-11-20 05:34:06.284245] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:34.652 [2024-11-20 05:34:06.284251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:34.652 [2024-11-20 05:34:06.284459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:34.652 [2024-11-20 05:34:06.288047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:34.652 [2024-11-20 05:34:06.288065] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:34.652 [2024-11-20 05:34:06.288200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:34.652 pt4 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.652 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:34.652 "name": "raid_bdev1", 00:23:34.652 "uuid": "8c580b0d-1a68-4dfa-a39a-08de3311c01f", 00:23:34.652 "strip_size_kb": 64, 00:23:34.652 "state": "online", 00:23:34.652 "raid_level": "raid5f", 00:23:34.652 "superblock": true, 00:23:34.652 "num_base_bdevs": 4, 00:23:34.652 "num_base_bdevs_discovered": 4, 00:23:34.652 "num_base_bdevs_operational": 4, 00:23:34.652 "base_bdevs_list": [ 00:23:34.653 { 00:23:34.653 "name": "pt1", 00:23:34.653 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:34.653 "is_configured": true, 00:23:34.653 
"data_offset": 2048, 00:23:34.653 "data_size": 63488 00:23:34.653 }, 00:23:34.653 { 00:23:34.653 "name": "pt2", 00:23:34.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:34.653 "is_configured": true, 00:23:34.653 "data_offset": 2048, 00:23:34.653 "data_size": 63488 00:23:34.653 }, 00:23:34.653 { 00:23:34.653 "name": "pt3", 00:23:34.653 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:34.653 "is_configured": true, 00:23:34.653 "data_offset": 2048, 00:23:34.653 "data_size": 63488 00:23:34.653 }, 00:23:34.653 { 00:23:34.653 "name": "pt4", 00:23:34.653 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:34.653 "is_configured": true, 00:23:34.653 "data_offset": 2048, 00:23:34.653 "data_size": 63488 00:23:34.653 } 00:23:34.653 ] 00:23:34.653 }' 00:23:34.653 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:34.653 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.911 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:34.911 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:34.911 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:34.911 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:34.911 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:34.911 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:34.911 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:34.911 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:34.911 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.911 05:34:06 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.911 [2024-11-20 05:34:06.596803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:34.911 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.911 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:34.911 "name": "raid_bdev1", 00:23:34.911 "aliases": [ 00:23:34.911 "8c580b0d-1a68-4dfa-a39a-08de3311c01f" 00:23:34.911 ], 00:23:34.911 "product_name": "Raid Volume", 00:23:34.911 "block_size": 512, 00:23:34.911 "num_blocks": 190464, 00:23:34.911 "uuid": "8c580b0d-1a68-4dfa-a39a-08de3311c01f", 00:23:34.911 "assigned_rate_limits": { 00:23:34.911 "rw_ios_per_sec": 0, 00:23:34.911 "rw_mbytes_per_sec": 0, 00:23:34.911 "r_mbytes_per_sec": 0, 00:23:34.911 "w_mbytes_per_sec": 0 00:23:34.911 }, 00:23:34.911 "claimed": false, 00:23:34.911 "zoned": false, 00:23:34.911 "supported_io_types": { 00:23:34.911 "read": true, 00:23:34.911 "write": true, 00:23:34.911 "unmap": false, 00:23:34.911 "flush": false, 00:23:34.911 "reset": true, 00:23:34.911 "nvme_admin": false, 00:23:34.911 "nvme_io": false, 00:23:34.911 "nvme_io_md": false, 00:23:34.911 "write_zeroes": true, 00:23:34.911 "zcopy": false, 00:23:34.911 "get_zone_info": false, 00:23:34.911 "zone_management": false, 00:23:34.911 "zone_append": false, 00:23:34.911 "compare": false, 00:23:34.911 "compare_and_write": false, 00:23:34.911 "abort": false, 00:23:34.911 "seek_hole": false, 00:23:34.911 "seek_data": false, 00:23:34.911 "copy": false, 00:23:34.911 "nvme_iov_md": false 00:23:34.911 }, 00:23:34.911 "driver_specific": { 00:23:34.911 "raid": { 00:23:34.911 "uuid": "8c580b0d-1a68-4dfa-a39a-08de3311c01f", 00:23:34.911 "strip_size_kb": 64, 00:23:34.911 "state": "online", 00:23:34.911 "raid_level": "raid5f", 00:23:34.911 "superblock": true, 00:23:34.911 "num_base_bdevs": 4, 00:23:34.911 "num_base_bdevs_discovered": 4, 
00:23:34.911 "num_base_bdevs_operational": 4, 00:23:34.911 "base_bdevs_list": [ 00:23:34.911 { 00:23:34.911 "name": "pt1", 00:23:34.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:34.911 "is_configured": true, 00:23:34.911 "data_offset": 2048, 00:23:34.911 "data_size": 63488 00:23:34.911 }, 00:23:34.911 { 00:23:34.911 "name": "pt2", 00:23:34.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:34.911 "is_configured": true, 00:23:34.911 "data_offset": 2048, 00:23:34.911 "data_size": 63488 00:23:34.911 }, 00:23:34.911 { 00:23:34.911 "name": "pt3", 00:23:34.911 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:34.911 "is_configured": true, 00:23:34.911 "data_offset": 2048, 00:23:34.911 "data_size": 63488 00:23:34.911 }, 00:23:34.911 { 00:23:34.911 "name": "pt4", 00:23:34.911 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:34.911 "is_configured": true, 00:23:34.911 "data_offset": 2048, 00:23:34.912 "data_size": 63488 00:23:34.912 } 00:23:34.912 ] 00:23:34.912 } 00:23:34.912 } 00:23:34.912 }' 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:34.912 pt2 00:23:34.912 pt3 00:23:34.912 pt4' 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.912 05:34:06 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:34.912 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.187 
05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.187 [2024-11-20 05:34:06.832811] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8c580b0d-1a68-4dfa-a39a-08de3311c01f '!=' 8c580b0d-1a68-4dfa-a39a-08de3311c01f ']' 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.187 [2024-11-20 05:34:06.852736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:35.187 "name": "raid_bdev1", 00:23:35.187 "uuid": "8c580b0d-1a68-4dfa-a39a-08de3311c01f", 00:23:35.187 "strip_size_kb": 64, 00:23:35.187 "state": "online", 00:23:35.187 "raid_level": "raid5f", 00:23:35.187 "superblock": true, 00:23:35.187 "num_base_bdevs": 4, 00:23:35.187 "num_base_bdevs_discovered": 3, 00:23:35.187 "num_base_bdevs_operational": 3, 00:23:35.187 "base_bdevs_list": [ 00:23:35.187 { 00:23:35.187 "name": null, 00:23:35.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.187 "is_configured": false, 00:23:35.187 "data_offset": 0, 00:23:35.187 "data_size": 63488 00:23:35.187 }, 00:23:35.187 { 00:23:35.187 "name": "pt2", 00:23:35.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:35.187 "is_configured": true, 00:23:35.187 "data_offset": 2048, 00:23:35.187 "data_size": 63488 00:23:35.187 }, 00:23:35.187 { 00:23:35.187 "name": "pt3", 00:23:35.187 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:35.187 "is_configured": true, 00:23:35.187 "data_offset": 2048, 00:23:35.187 "data_size": 63488 00:23:35.187 }, 00:23:35.187 { 00:23:35.187 "name": "pt4", 00:23:35.187 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:35.187 "is_configured": true, 00:23:35.187 
"data_offset": 2048, 00:23:35.187 "data_size": 63488 00:23:35.187 } 00:23:35.187 ] 00:23:35.187 }' 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:35.187 05:34:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.446 [2024-11-20 05:34:07.172719] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:35.446 [2024-11-20 05:34:07.172743] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:35.446 [2024-11-20 05:34:07.172796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:35.446 [2024-11-20 05:34:07.172858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:35.446 [2024-11-20 05:34:07.172866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.446 [2024-11-20 05:34:07.240735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:35.446 [2024-11-20 05:34:07.240779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.446 [2024-11-20 05:34:07.240794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:35.446 [2024-11-20 05:34:07.240801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.446 [2024-11-20 05:34:07.242610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.446 [2024-11-20 05:34:07.242634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:35.446 [2024-11-20 05:34:07.242695] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:35.446 [2024-11-20 05:34:07.242729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:35.446 pt2 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.446 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:35.739 "name": "raid_bdev1", 00:23:35.739 "uuid": "8c580b0d-1a68-4dfa-a39a-08de3311c01f", 00:23:35.739 "strip_size_kb": 64, 00:23:35.739 "state": "configuring", 00:23:35.739 "raid_level": "raid5f", 00:23:35.739 "superblock": true, 00:23:35.739 
"num_base_bdevs": 4, 00:23:35.739 "num_base_bdevs_discovered": 1, 00:23:35.739 "num_base_bdevs_operational": 3, 00:23:35.739 "base_bdevs_list": [ 00:23:35.739 { 00:23:35.739 "name": null, 00:23:35.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.739 "is_configured": false, 00:23:35.739 "data_offset": 2048, 00:23:35.739 "data_size": 63488 00:23:35.739 }, 00:23:35.739 { 00:23:35.739 "name": "pt2", 00:23:35.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:35.739 "is_configured": true, 00:23:35.739 "data_offset": 2048, 00:23:35.739 "data_size": 63488 00:23:35.739 }, 00:23:35.739 { 00:23:35.739 "name": null, 00:23:35.739 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:35.739 "is_configured": false, 00:23:35.739 "data_offset": 2048, 00:23:35.739 "data_size": 63488 00:23:35.739 }, 00:23:35.739 { 00:23:35.739 "name": null, 00:23:35.739 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:35.739 "is_configured": false, 00:23:35.739 "data_offset": 2048, 00:23:35.739 "data_size": 63488 00:23:35.739 } 00:23:35.739 ] 00:23:35.739 }' 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.739 [2024-11-20 05:34:07.552813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:35.739 [2024-11-20 
05:34:07.552864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.739 [2024-11-20 05:34:07.552880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:35.739 [2024-11-20 05:34:07.552887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.739 [2024-11-20 05:34:07.553222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.739 [2024-11-20 05:34:07.553233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:35.739 [2024-11-20 05:34:07.553294] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:35.739 [2024-11-20 05:34:07.553325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:35.739 pt3 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.739 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.998 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.998 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:35.998 "name": "raid_bdev1", 00:23:35.998 "uuid": "8c580b0d-1a68-4dfa-a39a-08de3311c01f", 00:23:35.998 "strip_size_kb": 64, 00:23:35.998 "state": "configuring", 00:23:35.998 "raid_level": "raid5f", 00:23:35.998 "superblock": true, 00:23:35.998 "num_base_bdevs": 4, 00:23:35.998 "num_base_bdevs_discovered": 2, 00:23:35.998 "num_base_bdevs_operational": 3, 00:23:35.998 "base_bdevs_list": [ 00:23:35.998 { 00:23:35.998 "name": null, 00:23:35.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.998 "is_configured": false, 00:23:35.998 "data_offset": 2048, 00:23:35.998 "data_size": 63488 00:23:35.998 }, 00:23:35.998 { 00:23:35.998 "name": "pt2", 00:23:35.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:35.998 "is_configured": true, 00:23:35.998 "data_offset": 2048, 00:23:35.998 "data_size": 63488 00:23:35.998 }, 00:23:35.998 { 00:23:35.998 "name": "pt3", 00:23:35.998 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:35.998 "is_configured": true, 00:23:35.998 "data_offset": 2048, 00:23:35.998 "data_size": 63488 00:23:35.998 }, 00:23:35.998 { 00:23:35.998 "name": null, 00:23:35.998 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:35.998 "is_configured": false, 00:23:35.998 "data_offset": 2048, 
00:23:35.998 "data_size": 63488 00:23:35.998 } 00:23:35.998 ] 00:23:35.998 }' 00:23:35.998 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:35.998 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.257 [2024-11-20 05:34:07.880882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:36.257 [2024-11-20 05:34:07.880932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.257 [2024-11-20 05:34:07.880948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:23:36.257 [2024-11-20 05:34:07.880955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.257 [2024-11-20 05:34:07.881285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.257 [2024-11-20 05:34:07.881295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:36.257 [2024-11-20 05:34:07.881352] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:36.257 [2024-11-20 05:34:07.881389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:36.257 [2024-11-20 05:34:07.881493] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:36.257 [2024-11-20 05:34:07.881500] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:36.257 [2024-11-20 05:34:07.881694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:36.257 [2024-11-20 05:34:07.885535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:36.257 [2024-11-20 05:34:07.885555] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:36.257 [2024-11-20 05:34:07.885777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.257 pt4 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.257 
05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.257 "name": "raid_bdev1", 00:23:36.257 "uuid": "8c580b0d-1a68-4dfa-a39a-08de3311c01f", 00:23:36.257 "strip_size_kb": 64, 00:23:36.257 "state": "online", 00:23:36.257 "raid_level": "raid5f", 00:23:36.257 "superblock": true, 00:23:36.257 "num_base_bdevs": 4, 00:23:36.257 "num_base_bdevs_discovered": 3, 00:23:36.257 "num_base_bdevs_operational": 3, 00:23:36.257 "base_bdevs_list": [ 00:23:36.257 { 00:23:36.257 "name": null, 00:23:36.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.257 "is_configured": false, 00:23:36.257 "data_offset": 2048, 00:23:36.257 "data_size": 63488 00:23:36.257 }, 00:23:36.257 { 00:23:36.257 "name": "pt2", 00:23:36.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:36.257 "is_configured": true, 00:23:36.257 "data_offset": 2048, 00:23:36.257 "data_size": 63488 00:23:36.257 }, 00:23:36.257 { 00:23:36.257 "name": "pt3", 00:23:36.257 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:36.257 "is_configured": true, 00:23:36.257 "data_offset": 2048, 00:23:36.257 "data_size": 63488 00:23:36.257 }, 00:23:36.257 { 00:23:36.257 "name": "pt4", 00:23:36.257 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:36.257 "is_configured": true, 00:23:36.257 "data_offset": 2048, 00:23:36.257 "data_size": 63488 00:23:36.257 } 00:23:36.257 ] 00:23:36.257 }' 00:23:36.257 05:34:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.257 05:34:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.516 [2024-11-20 05:34:08.214097] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:36.516 [2024-11-20 05:34:08.214117] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:36.516 [2024-11-20 05:34:08.214171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:36.516 [2024-11-20 05:34:08.214228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:36.516 [2024-11-20 05:34:08.214237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.516 [2024-11-20 05:34:08.262106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:36.516 [2024-11-20 05:34:08.262157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.516 [2024-11-20 05:34:08.262174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:23:36.516 [2024-11-20 05:34:08.262183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.516 [2024-11-20 05:34:08.264051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.516 [2024-11-20 05:34:08.264178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:36.516 [2024-11-20 05:34:08.264252] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:36.516 [2024-11-20 05:34:08.264294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:36.516 
[2024-11-20 05:34:08.264418] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:36.516 [2024-11-20 05:34:08.264429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:36.516 [2024-11-20 05:34:08.264442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:36.516 [2024-11-20 05:34:08.264484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:36.516 [2024-11-20 05:34:08.264569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:36.516 pt1 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.516 "name": "raid_bdev1", 00:23:36.516 "uuid": "8c580b0d-1a68-4dfa-a39a-08de3311c01f", 00:23:36.516 "strip_size_kb": 64, 00:23:36.516 "state": "configuring", 00:23:36.516 "raid_level": "raid5f", 00:23:36.516 "superblock": true, 00:23:36.516 "num_base_bdevs": 4, 00:23:36.516 "num_base_bdevs_discovered": 2, 00:23:36.516 "num_base_bdevs_operational": 3, 00:23:36.516 "base_bdevs_list": [ 00:23:36.516 { 00:23:36.516 "name": null, 00:23:36.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.516 "is_configured": false, 00:23:36.516 "data_offset": 2048, 00:23:36.516 "data_size": 63488 00:23:36.516 }, 00:23:36.516 { 00:23:36.516 "name": "pt2", 00:23:36.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:36.516 "is_configured": true, 00:23:36.516 "data_offset": 2048, 00:23:36.516 "data_size": 63488 00:23:36.516 }, 00:23:36.516 { 00:23:36.516 "name": "pt3", 00:23:36.516 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:36.516 "is_configured": true, 00:23:36.516 "data_offset": 2048, 00:23:36.516 "data_size": 63488 00:23:36.516 }, 00:23:36.516 { 00:23:36.516 "name": null, 00:23:36.516 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:36.516 "is_configured": false, 00:23:36.516 "data_offset": 2048, 00:23:36.516 "data_size": 63488 00:23:36.516 } 00:23:36.516 ] 
00:23:36.516 }' 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.516 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.853 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:23:36.853 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:36.853 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.853 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.853 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.854 [2024-11-20 05:34:08.590191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:36.854 [2024-11-20 05:34:08.590241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.854 [2024-11-20 05:34:08.590259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:23:36.854 [2024-11-20 05:34:08.590266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.854 [2024-11-20 05:34:08.590610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.854 [2024-11-20 05:34:08.590621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:23:36.854 [2024-11-20 05:34:08.590678] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:36.854 [2024-11-20 05:34:08.590693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:36.854 [2024-11-20 05:34:08.590788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:36.854 [2024-11-20 05:34:08.590795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:36.854 [2024-11-20 05:34:08.590997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:36.854 [2024-11-20 05:34:08.594725] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:36.854 [2024-11-20 05:34:08.594743] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:36.854 [2024-11-20 05:34:08.594941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.854 pt4 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.854 05:34:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.854 "name": "raid_bdev1", 00:23:36.854 "uuid": "8c580b0d-1a68-4dfa-a39a-08de3311c01f", 00:23:36.854 "strip_size_kb": 64, 00:23:36.854 "state": "online", 00:23:36.854 "raid_level": "raid5f", 00:23:36.854 "superblock": true, 00:23:36.854 "num_base_bdevs": 4, 00:23:36.854 "num_base_bdevs_discovered": 3, 00:23:36.854 "num_base_bdevs_operational": 3, 00:23:36.854 "base_bdevs_list": [ 00:23:36.854 { 00:23:36.854 "name": null, 00:23:36.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.854 "is_configured": false, 00:23:36.854 "data_offset": 2048, 00:23:36.854 "data_size": 63488 00:23:36.854 }, 00:23:36.854 { 00:23:36.854 "name": "pt2", 00:23:36.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:36.854 "is_configured": true, 00:23:36.854 "data_offset": 2048, 00:23:36.854 "data_size": 63488 00:23:36.854 }, 00:23:36.854 { 00:23:36.854 "name": "pt3", 00:23:36.854 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:36.854 "is_configured": true, 00:23:36.854 "data_offset": 2048, 00:23:36.854 "data_size": 63488 
00:23:36.854 }, 00:23:36.854 { 00:23:36.854 "name": "pt4", 00:23:36.854 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:36.854 "is_configured": true, 00:23:36.854 "data_offset": 2048, 00:23:36.854 "data_size": 63488 00:23:36.854 } 00:23:36.854 ] 00:23:36.854 }' 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.854 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.113 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:37.113 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.113 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.113 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:37.113 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.373 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:37.373 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:37.373 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.373 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.373 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:37.373 [2024-11-20 05:34:08.959388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:37.373 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.373 05:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8c580b0d-1a68-4dfa-a39a-08de3311c01f '!=' 8c580b0d-1a68-4dfa-a39a-08de3311c01f ']' 00:23:37.373 05:34:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81770 00:23:37.373 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81770 ']' 00:23:37.373 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81770 00:23:37.373 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:23:37.373 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:37.373 05:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81770 00:23:37.373 killing process with pid 81770 00:23:37.373 05:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:37.373 05:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:37.373 05:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81770' 00:23:37.373 05:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 81770 00:23:37.373 05:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 81770 00:23:37.373 [2024-11-20 05:34:09.008951] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:37.373 [2024-11-20 05:34:09.009021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:37.373 [2024-11-20 05:34:09.009080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:37.373 [2024-11-20 05:34:09.009094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:37.631 [2024-11-20 05:34:09.205954] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:38.197 05:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:23:38.197 
00:23:38.197 real 0m5.949s 00:23:38.197 user 0m9.438s 00:23:38.197 sys 0m1.042s 00:23:38.197 05:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:38.197 ************************************ 00:23:38.197 END TEST raid5f_superblock_test 00:23:38.197 ************************************ 00:23:38.197 05:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.197 05:34:09 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:23:38.197 05:34:09 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:23:38.197 05:34:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:23:38.197 05:34:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:38.197 05:34:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:38.197 ************************************ 00:23:38.197 START TEST raid5f_rebuild_test 00:23:38.197 ************************************ 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:38.197 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:23:38.198 05:34:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82233 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82233 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 82233 ']' 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.198 05:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:38.198 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:38.198 Zero copy mechanism will not be used. 00:23:38.198 [2024-11-20 05:34:09.890029] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:23:38.198 [2024-11-20 05:34:09.890146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82233 ] 00:23:38.455 [2024-11-20 05:34:10.046288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.455 [2024-11-20 05:34:10.133180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.455 [2024-11-20 05:34:10.244297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:38.455 [2024-11-20 05:34:10.244324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.021 BaseBdev1_malloc 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.021 [2024-11-20 05:34:10.764813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:23:39.021 [2024-11-20 05:34:10.764879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.021 [2024-11-20 05:34:10.764905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:39.021 [2024-11-20 05:34:10.764919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.021 [2024-11-20 05:34:10.766830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.021 [2024-11-20 05:34:10.766866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:39.021 BaseBdev1 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.021 BaseBdev2_malloc 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.021 [2024-11-20 05:34:10.797181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:39.021 [2024-11-20 05:34:10.797238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.021 [2024-11-20 05:34:10.797253] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:39.021 [2024-11-20 05:34:10.797262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.021 [2024-11-20 05:34:10.799003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.021 [2024-11-20 05:34:10.799036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:39.021 BaseBdev2 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.021 BaseBdev3_malloc 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.021 [2024-11-20 05:34:10.839983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:39.021 [2024-11-20 05:34:10.840029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.021 [2024-11-20 05:34:10.840046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:39.021 [2024-11-20 05:34:10.840055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.021 
[2024-11-20 05:34:10.841796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.021 [2024-11-20 05:34:10.841828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:39.021 BaseBdev3 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:39.021 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.022 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.280 BaseBdev4_malloc 00:23:39.280 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.280 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:39.280 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.280 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.280 [2024-11-20 05:34:10.872667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:39.280 [2024-11-20 05:34:10.872715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.280 [2024-11-20 05:34:10.872730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:39.280 [2024-11-20 05:34:10.872739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.280 [2024-11-20 05:34:10.874544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.280 [2024-11-20 05:34:10.874579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:23:39.280 BaseBdev4 00:23:39.280 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.280 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:39.280 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.280 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.280 spare_malloc 00:23:39.280 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.280 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:39.280 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.280 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.280 spare_delay 00:23:39.280 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.280 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:39.280 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.280 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.281 [2024-11-20 05:34:10.912215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:39.281 [2024-11-20 05:34:10.912399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.281 [2024-11-20 05:34:10.912422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:39.281 [2024-11-20 05:34:10.912431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.281 [2024-11-20 05:34:10.914174] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.281 [2024-11-20 05:34:10.914206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:39.281 spare 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.281 [2024-11-20 05:34:10.920255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:39.281 [2024-11-20 05:34:10.921869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:39.281 [2024-11-20 05:34:10.921922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:39.281 [2024-11-20 05:34:10.921963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:39.281 [2024-11-20 05:34:10.922033] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:39.281 [2024-11-20 05:34:10.922044] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:39.281 [2024-11-20 05:34:10.922254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:39.281 [2024-11-20 05:34:10.926144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:39.281 [2024-11-20 05:34:10.926160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:39.281 [2024-11-20 05:34:10.926318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.281 05:34:10 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.281 "name": "raid_bdev1", 00:23:39.281 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:39.281 "strip_size_kb": 64, 00:23:39.281 "state": "online", 00:23:39.281 
"raid_level": "raid5f", 00:23:39.281 "superblock": false, 00:23:39.281 "num_base_bdevs": 4, 00:23:39.281 "num_base_bdevs_discovered": 4, 00:23:39.281 "num_base_bdevs_operational": 4, 00:23:39.281 "base_bdevs_list": [ 00:23:39.281 { 00:23:39.281 "name": "BaseBdev1", 00:23:39.281 "uuid": "30dc0f3a-61e3-5494-9647-8cf212378ee5", 00:23:39.281 "is_configured": true, 00:23:39.281 "data_offset": 0, 00:23:39.281 "data_size": 65536 00:23:39.281 }, 00:23:39.281 { 00:23:39.281 "name": "BaseBdev2", 00:23:39.281 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:39.281 "is_configured": true, 00:23:39.281 "data_offset": 0, 00:23:39.281 "data_size": 65536 00:23:39.281 }, 00:23:39.281 { 00:23:39.281 "name": "BaseBdev3", 00:23:39.281 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:39.281 "is_configured": true, 00:23:39.281 "data_offset": 0, 00:23:39.281 "data_size": 65536 00:23:39.281 }, 00:23:39.281 { 00:23:39.281 "name": "BaseBdev4", 00:23:39.281 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:39.281 "is_configured": true, 00:23:39.281 "data_offset": 0, 00:23:39.281 "data_size": 65536 00:23:39.281 } 00:23:39.281 ] 00:23:39.281 }' 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.281 05:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.539 [2024-11-20 05:34:11.258779] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:39.539 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:39.540 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:39.540 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:39.540 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:39.540 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:39.540 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:39.540 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:23:39.540 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:39.798 [2024-11-20 05:34:11.490671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:39.798 /dev/nbd0 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:39.798 1+0 records in 00:23:39.798 1+0 records out 00:23:39.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00015707 s, 26.1 MB/s 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:23:39.798 05:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:23:40.364 512+0 records in 00:23:40.364 512+0 records out 00:23:40.364 100663296 bytes (101 MB, 96 MiB) copied, 0.487666 s, 206 MB/s 00:23:40.364 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:40.364 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:40.364 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:40.364 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:40.364 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:40.364 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:40.364 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:40.622 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:40.623 
05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:40.623 [2024-11-20 05:34:12.239322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.623 [2024-11-20 05:34:12.251657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:40.623 "name": "raid_bdev1", 00:23:40.623 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:40.623 "strip_size_kb": 64, 00:23:40.623 "state": "online", 00:23:40.623 "raid_level": "raid5f", 00:23:40.623 "superblock": false, 00:23:40.623 "num_base_bdevs": 4, 00:23:40.623 "num_base_bdevs_discovered": 3, 00:23:40.623 "num_base_bdevs_operational": 3, 00:23:40.623 "base_bdevs_list": [ 00:23:40.623 { 00:23:40.623 "name": null, 00:23:40.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.623 "is_configured": false, 00:23:40.623 "data_offset": 0, 00:23:40.623 "data_size": 65536 00:23:40.623 }, 00:23:40.623 { 00:23:40.623 "name": "BaseBdev2", 00:23:40.623 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:40.623 "is_configured": true, 00:23:40.623 "data_offset": 0, 00:23:40.623 "data_size": 65536 00:23:40.623 }, 00:23:40.623 { 00:23:40.623 "name": "BaseBdev3", 00:23:40.623 "uuid": 
"1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:40.623 "is_configured": true, 00:23:40.623 "data_offset": 0, 00:23:40.623 "data_size": 65536 00:23:40.623 }, 00:23:40.623 { 00:23:40.623 "name": "BaseBdev4", 00:23:40.623 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:40.623 "is_configured": true, 00:23:40.623 "data_offset": 0, 00:23:40.623 "data_size": 65536 00:23:40.623 } 00:23:40.623 ] 00:23:40.623 }' 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:40.623 05:34:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.881 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:40.881 05:34:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.881 05:34:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.881 [2024-11-20 05:34:12.583705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:40.881 [2024-11-20 05:34:12.591842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:23:40.881 05:34:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.881 05:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:40.881 [2024-11-20 05:34:12.597193] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:41.815 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:41.815 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:41.815 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:41.815 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:41.815 05:34:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:41.815 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.815 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.815 05:34:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.815 05:34:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.815 05:34:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.815 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:41.815 "name": "raid_bdev1", 00:23:41.815 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:41.815 "strip_size_kb": 64, 00:23:41.815 "state": "online", 00:23:41.815 "raid_level": "raid5f", 00:23:41.815 "superblock": false, 00:23:41.815 "num_base_bdevs": 4, 00:23:41.815 "num_base_bdevs_discovered": 4, 00:23:41.815 "num_base_bdevs_operational": 4, 00:23:41.815 "process": { 00:23:41.815 "type": "rebuild", 00:23:41.815 "target": "spare", 00:23:41.815 "progress": { 00:23:41.815 "blocks": 19200, 00:23:41.815 "percent": 9 00:23:41.815 } 00:23:41.815 }, 00:23:41.815 "base_bdevs_list": [ 00:23:41.815 { 00:23:41.815 "name": "spare", 00:23:41.815 "uuid": "c2898d7a-c01d-5e26-9add-2bd8214b5370", 00:23:41.815 "is_configured": true, 00:23:41.815 "data_offset": 0, 00:23:41.815 "data_size": 65536 00:23:41.815 }, 00:23:41.815 { 00:23:41.815 "name": "BaseBdev2", 00:23:41.815 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:41.815 "is_configured": true, 00:23:41.815 "data_offset": 0, 00:23:41.815 "data_size": 65536 00:23:41.815 }, 00:23:41.815 { 00:23:41.815 "name": "BaseBdev3", 00:23:41.815 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:41.815 "is_configured": true, 00:23:41.815 "data_offset": 0, 00:23:41.815 "data_size": 65536 00:23:41.815 }, 
00:23:41.815 { 00:23:41.815 "name": "BaseBdev4", 00:23:41.815 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:41.815 "is_configured": true, 00:23:41.815 "data_offset": 0, 00:23:41.815 "data_size": 65536 00:23:41.815 } 00:23:41.815 ] 00:23:41.815 }' 00:23:41.815 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.074 [2024-11-20 05:34:13.702127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:42.074 [2024-11-20 05:34:13.704742] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:42.074 [2024-11-20 05:34:13.704880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:42.074 [2024-11-20 05:34:13.704897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:42.074 [2024-11-20 05:34:13.704906] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.074 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:42.074 "name": "raid_bdev1", 00:23:42.074 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:42.074 "strip_size_kb": 64, 00:23:42.074 "state": "online", 00:23:42.074 "raid_level": "raid5f", 00:23:42.074 "superblock": false, 00:23:42.074 "num_base_bdevs": 4, 00:23:42.074 "num_base_bdevs_discovered": 3, 00:23:42.074 "num_base_bdevs_operational": 3, 00:23:42.074 "base_bdevs_list": [ 00:23:42.074 { 00:23:42.074 "name": null, 00:23:42.074 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:42.074 "is_configured": false, 00:23:42.074 "data_offset": 0, 00:23:42.074 "data_size": 65536 00:23:42.074 }, 00:23:42.074 { 00:23:42.074 "name": "BaseBdev2", 00:23:42.074 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:42.074 "is_configured": true, 00:23:42.074 "data_offset": 0, 00:23:42.074 "data_size": 65536 00:23:42.074 }, 00:23:42.074 { 00:23:42.074 "name": "BaseBdev3", 00:23:42.074 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:42.074 "is_configured": true, 00:23:42.074 "data_offset": 0, 00:23:42.074 "data_size": 65536 00:23:42.074 }, 00:23:42.074 { 00:23:42.074 "name": "BaseBdev4", 00:23:42.074 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:42.074 "is_configured": true, 00:23:42.074 "data_offset": 0, 00:23:42.074 "data_size": 65536 00:23:42.074 } 00:23:42.074 ] 00:23:42.075 }' 00:23:42.075 05:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:42.075 05:34:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.333 05:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:42.333 05:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:42.333 05:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:42.333 05:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:42.333 05:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:42.333 05:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.333 05:34:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.333 05:34:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.333 05:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.333 05:34:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.333 05:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:42.333 "name": "raid_bdev1", 00:23:42.333 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:42.333 "strip_size_kb": 64, 00:23:42.333 "state": "online", 00:23:42.333 "raid_level": "raid5f", 00:23:42.333 "superblock": false, 00:23:42.334 "num_base_bdevs": 4, 00:23:42.334 "num_base_bdevs_discovered": 3, 00:23:42.334 "num_base_bdevs_operational": 3, 00:23:42.334 "base_bdevs_list": [ 00:23:42.334 { 00:23:42.334 "name": null, 00:23:42.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.334 "is_configured": false, 00:23:42.334 "data_offset": 0, 00:23:42.334 "data_size": 65536 00:23:42.334 }, 00:23:42.334 { 00:23:42.334 "name": "BaseBdev2", 00:23:42.334 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:42.334 "is_configured": true, 00:23:42.334 "data_offset": 0, 00:23:42.334 "data_size": 65536 00:23:42.334 }, 00:23:42.334 { 00:23:42.334 "name": "BaseBdev3", 00:23:42.334 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:42.334 "is_configured": true, 00:23:42.334 "data_offset": 0, 00:23:42.334 "data_size": 65536 00:23:42.334 }, 00:23:42.334 { 00:23:42.334 "name": "BaseBdev4", 00:23:42.334 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:42.334 "is_configured": true, 00:23:42.334 "data_offset": 0, 00:23:42.334 "data_size": 65536 00:23:42.334 } 00:23:42.334 ] 00:23:42.334 }' 00:23:42.334 05:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:42.334 05:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:42.334 05:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:42.334 05:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:23:42.334 05:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:42.334 05:34:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.334 05:34:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.334 [2024-11-20 05:34:14.141254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:42.334 [2024-11-20 05:34:14.148977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:23:42.334 05:34:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.334 05:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:42.334 [2024-11-20 05:34:14.154164] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:43.755 "name": "raid_bdev1", 00:23:43.755 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:43.755 "strip_size_kb": 64, 00:23:43.755 "state": "online", 00:23:43.755 "raid_level": "raid5f", 00:23:43.755 "superblock": false, 00:23:43.755 "num_base_bdevs": 4, 00:23:43.755 "num_base_bdevs_discovered": 4, 00:23:43.755 "num_base_bdevs_operational": 4, 00:23:43.755 "process": { 00:23:43.755 "type": "rebuild", 00:23:43.755 "target": "spare", 00:23:43.755 "progress": { 00:23:43.755 "blocks": 19200, 00:23:43.755 "percent": 9 00:23:43.755 } 00:23:43.755 }, 00:23:43.755 "base_bdevs_list": [ 00:23:43.755 { 00:23:43.755 "name": "spare", 00:23:43.755 "uuid": "c2898d7a-c01d-5e26-9add-2bd8214b5370", 00:23:43.755 "is_configured": true, 00:23:43.755 "data_offset": 0, 00:23:43.755 "data_size": 65536 00:23:43.755 }, 00:23:43.755 { 00:23:43.755 "name": "BaseBdev2", 00:23:43.755 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:43.755 "is_configured": true, 00:23:43.755 "data_offset": 0, 00:23:43.755 "data_size": 65536 00:23:43.755 }, 00:23:43.755 { 00:23:43.755 "name": "BaseBdev3", 00:23:43.755 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:43.755 "is_configured": true, 00:23:43.755 "data_offset": 0, 00:23:43.755 "data_size": 65536 00:23:43.755 }, 00:23:43.755 { 00:23:43.755 "name": "BaseBdev4", 00:23:43.755 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:43.755 "is_configured": true, 00:23:43.755 "data_offset": 0, 00:23:43.755 "data_size": 65536 00:23:43.755 } 00:23:43.755 ] 00:23:43.755 }' 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=490 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.755 05:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.756 05:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.756 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.756 05:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.756 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:43.756 "name": "raid_bdev1", 00:23:43.756 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:43.756 "strip_size_kb": 64, 
00:23:43.756 "state": "online", 00:23:43.756 "raid_level": "raid5f", 00:23:43.756 "superblock": false, 00:23:43.756 "num_base_bdevs": 4, 00:23:43.756 "num_base_bdevs_discovered": 4, 00:23:43.756 "num_base_bdevs_operational": 4, 00:23:43.756 "process": { 00:23:43.756 "type": "rebuild", 00:23:43.756 "target": "spare", 00:23:43.756 "progress": { 00:23:43.756 "blocks": 19200, 00:23:43.756 "percent": 9 00:23:43.756 } 00:23:43.756 }, 00:23:43.756 "base_bdevs_list": [ 00:23:43.756 { 00:23:43.756 "name": "spare", 00:23:43.756 "uuid": "c2898d7a-c01d-5e26-9add-2bd8214b5370", 00:23:43.756 "is_configured": true, 00:23:43.756 "data_offset": 0, 00:23:43.756 "data_size": 65536 00:23:43.756 }, 00:23:43.756 { 00:23:43.756 "name": "BaseBdev2", 00:23:43.756 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:43.756 "is_configured": true, 00:23:43.756 "data_offset": 0, 00:23:43.756 "data_size": 65536 00:23:43.756 }, 00:23:43.756 { 00:23:43.756 "name": "BaseBdev3", 00:23:43.756 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:43.756 "is_configured": true, 00:23:43.756 "data_offset": 0, 00:23:43.756 "data_size": 65536 00:23:43.756 }, 00:23:43.756 { 00:23:43.756 "name": "BaseBdev4", 00:23:43.756 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:43.756 "is_configured": true, 00:23:43.756 "data_offset": 0, 00:23:43.756 "data_size": 65536 00:23:43.756 } 00:23:43.756 ] 00:23:43.756 }' 00:23:43.756 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:43.756 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.756 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:43.756 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.756 05:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:44.708 "name": "raid_bdev1", 00:23:44.708 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:44.708 "strip_size_kb": 64, 00:23:44.708 "state": "online", 00:23:44.708 "raid_level": "raid5f", 00:23:44.708 "superblock": false, 00:23:44.708 "num_base_bdevs": 4, 00:23:44.708 "num_base_bdevs_discovered": 4, 00:23:44.708 "num_base_bdevs_operational": 4, 00:23:44.708 "process": { 00:23:44.708 "type": "rebuild", 00:23:44.708 "target": "spare", 00:23:44.708 "progress": { 00:23:44.708 "blocks": 40320, 00:23:44.708 "percent": 20 00:23:44.708 } 00:23:44.708 }, 00:23:44.708 "base_bdevs_list": [ 00:23:44.708 { 00:23:44.708 "name": "spare", 00:23:44.708 "uuid": "c2898d7a-c01d-5e26-9add-2bd8214b5370", 00:23:44.708 "is_configured": true, 
00:23:44.708 "data_offset": 0, 00:23:44.708 "data_size": 65536 00:23:44.708 }, 00:23:44.708 { 00:23:44.708 "name": "BaseBdev2", 00:23:44.708 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:44.708 "is_configured": true, 00:23:44.708 "data_offset": 0, 00:23:44.708 "data_size": 65536 00:23:44.708 }, 00:23:44.708 { 00:23:44.708 "name": "BaseBdev3", 00:23:44.708 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:44.708 "is_configured": true, 00:23:44.708 "data_offset": 0, 00:23:44.708 "data_size": 65536 00:23:44.708 }, 00:23:44.708 { 00:23:44.708 "name": "BaseBdev4", 00:23:44.708 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:44.708 "is_configured": true, 00:23:44.708 "data_offset": 0, 00:23:44.708 "data_size": 65536 00:23:44.708 } 00:23:44.708 ] 00:23:44.708 }' 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.708 05:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:45.643 05:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:45.643 05:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:45.643 05:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:45.643 05:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:45.643 05:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:45.643 05:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:23:45.643 05:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.643 05:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.643 05:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.643 05:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.643 05:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.643 05:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:45.643 "name": "raid_bdev1", 00:23:45.643 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:45.643 "strip_size_kb": 64, 00:23:45.643 "state": "online", 00:23:45.643 "raid_level": "raid5f", 00:23:45.643 "superblock": false, 00:23:45.643 "num_base_bdevs": 4, 00:23:45.643 "num_base_bdevs_discovered": 4, 00:23:45.643 "num_base_bdevs_operational": 4, 00:23:45.643 "process": { 00:23:45.643 "type": "rebuild", 00:23:45.643 "target": "spare", 00:23:45.643 "progress": { 00:23:45.643 "blocks": 61440, 00:23:45.643 "percent": 31 00:23:45.643 } 00:23:45.643 }, 00:23:45.643 "base_bdevs_list": [ 00:23:45.643 { 00:23:45.643 "name": "spare", 00:23:45.643 "uuid": "c2898d7a-c01d-5e26-9add-2bd8214b5370", 00:23:45.643 "is_configured": true, 00:23:45.643 "data_offset": 0, 00:23:45.643 "data_size": 65536 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "name": "BaseBdev2", 00:23:45.643 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:45.643 "is_configured": true, 00:23:45.643 "data_offset": 0, 00:23:45.643 "data_size": 65536 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "name": "BaseBdev3", 00:23:45.643 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:45.643 "is_configured": true, 00:23:45.643 "data_offset": 0, 00:23:45.643 "data_size": 65536 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "name": "BaseBdev4", 00:23:45.643 "uuid": 
"32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:45.643 "is_configured": true, 00:23:45.643 "data_offset": 0, 00:23:45.643 "data_size": 65536 00:23:45.643 } 00:23:45.643 ] 00:23:45.643 }' 00:23:45.644 05:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:45.902 05:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:45.902 05:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:45.902 05:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:45.902 05:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:46.835 "name": "raid_bdev1", 00:23:46.835 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:46.835 "strip_size_kb": 64, 00:23:46.835 "state": "online", 00:23:46.835 "raid_level": "raid5f", 00:23:46.835 "superblock": false, 00:23:46.835 "num_base_bdevs": 4, 00:23:46.835 "num_base_bdevs_discovered": 4, 00:23:46.835 "num_base_bdevs_operational": 4, 00:23:46.835 "process": { 00:23:46.835 "type": "rebuild", 00:23:46.835 "target": "spare", 00:23:46.835 "progress": { 00:23:46.835 "blocks": 82560, 00:23:46.835 "percent": 41 00:23:46.835 } 00:23:46.835 }, 00:23:46.835 "base_bdevs_list": [ 00:23:46.835 { 00:23:46.835 "name": "spare", 00:23:46.835 "uuid": "c2898d7a-c01d-5e26-9add-2bd8214b5370", 00:23:46.835 "is_configured": true, 00:23:46.835 "data_offset": 0, 00:23:46.835 "data_size": 65536 00:23:46.835 }, 00:23:46.835 { 00:23:46.835 "name": "BaseBdev2", 00:23:46.835 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:46.835 "is_configured": true, 00:23:46.835 "data_offset": 0, 00:23:46.835 "data_size": 65536 00:23:46.835 }, 00:23:46.835 { 00:23:46.835 "name": "BaseBdev3", 00:23:46.835 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:46.835 "is_configured": true, 00:23:46.835 "data_offset": 0, 00:23:46.835 "data_size": 65536 00:23:46.835 }, 00:23:46.835 { 00:23:46.835 "name": "BaseBdev4", 00:23:46.835 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:46.835 "is_configured": true, 00:23:46.835 "data_offset": 0, 00:23:46.835 "data_size": 65536 00:23:46.835 } 00:23:46.835 ] 00:23:46.835 }' 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:23:46.835 05:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:48.208 "name": "raid_bdev1", 00:23:48.208 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:48.208 "strip_size_kb": 64, 00:23:48.208 "state": "online", 00:23:48.208 "raid_level": "raid5f", 00:23:48.208 "superblock": false, 00:23:48.208 "num_base_bdevs": 4, 00:23:48.208 "num_base_bdevs_discovered": 4, 00:23:48.208 "num_base_bdevs_operational": 4, 00:23:48.208 "process": { 00:23:48.208 "type": "rebuild", 00:23:48.208 "target": "spare", 00:23:48.208 "progress": { 00:23:48.208 "blocks": 103680, 00:23:48.208 "percent": 52 00:23:48.208 } 00:23:48.208 }, 00:23:48.208 
"base_bdevs_list": [ 00:23:48.208 { 00:23:48.208 "name": "spare", 00:23:48.208 "uuid": "c2898d7a-c01d-5e26-9add-2bd8214b5370", 00:23:48.208 "is_configured": true, 00:23:48.208 "data_offset": 0, 00:23:48.208 "data_size": 65536 00:23:48.208 }, 00:23:48.208 { 00:23:48.208 "name": "BaseBdev2", 00:23:48.208 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:48.208 "is_configured": true, 00:23:48.208 "data_offset": 0, 00:23:48.208 "data_size": 65536 00:23:48.208 }, 00:23:48.208 { 00:23:48.208 "name": "BaseBdev3", 00:23:48.208 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:48.208 "is_configured": true, 00:23:48.208 "data_offset": 0, 00:23:48.208 "data_size": 65536 00:23:48.208 }, 00:23:48.208 { 00:23:48.208 "name": "BaseBdev4", 00:23:48.208 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:48.208 "is_configured": true, 00:23:48.208 "data_offset": 0, 00:23:48.208 "data_size": 65536 00:23:48.208 } 00:23:48.208 ] 00:23:48.208 }' 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:48.208 05:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:49.142 05:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:49.142 05:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:49.142 05:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:49.142 05:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:49.142 05:34:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:49.142 05:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:49.142 05:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.142 05:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.142 05:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.142 05:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.142 05:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.142 05:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:49.142 "name": "raid_bdev1", 00:23:49.142 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:49.142 "strip_size_kb": 64, 00:23:49.142 "state": "online", 00:23:49.142 "raid_level": "raid5f", 00:23:49.142 "superblock": false, 00:23:49.142 "num_base_bdevs": 4, 00:23:49.142 "num_base_bdevs_discovered": 4, 00:23:49.142 "num_base_bdevs_operational": 4, 00:23:49.142 "process": { 00:23:49.142 "type": "rebuild", 00:23:49.142 "target": "spare", 00:23:49.142 "progress": { 00:23:49.142 "blocks": 124800, 00:23:49.142 "percent": 63 00:23:49.142 } 00:23:49.142 }, 00:23:49.142 "base_bdevs_list": [ 00:23:49.142 { 00:23:49.142 "name": "spare", 00:23:49.142 "uuid": "c2898d7a-c01d-5e26-9add-2bd8214b5370", 00:23:49.142 "is_configured": true, 00:23:49.142 "data_offset": 0, 00:23:49.142 "data_size": 65536 00:23:49.142 }, 00:23:49.142 { 00:23:49.142 "name": "BaseBdev2", 00:23:49.142 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:49.142 "is_configured": true, 00:23:49.143 "data_offset": 0, 00:23:49.143 "data_size": 65536 00:23:49.143 }, 00:23:49.143 { 00:23:49.143 "name": "BaseBdev3", 00:23:49.143 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:49.143 
"is_configured": true, 00:23:49.143 "data_offset": 0, 00:23:49.143 "data_size": 65536 00:23:49.143 }, 00:23:49.143 { 00:23:49.143 "name": "BaseBdev4", 00:23:49.143 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:49.143 "is_configured": true, 00:23:49.143 "data_offset": 0, 00:23:49.143 "data_size": 65536 00:23:49.143 } 00:23:49.143 ] 00:23:49.143 }' 00:23:49.143 05:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:49.143 05:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:49.143 05:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:49.143 05:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:49.143 05:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:50.077 05:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:50.077 05:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:50.077 05:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:50.077 05:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:50.077 05:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:50.077 05:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:50.077 05:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.077 05:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.077 05:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.077 05:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:23:50.077 05:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.077 05:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:50.077 "name": "raid_bdev1", 00:23:50.077 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:50.077 "strip_size_kb": 64, 00:23:50.077 "state": "online", 00:23:50.077 "raid_level": "raid5f", 00:23:50.077 "superblock": false, 00:23:50.077 "num_base_bdevs": 4, 00:23:50.077 "num_base_bdevs_discovered": 4, 00:23:50.077 "num_base_bdevs_operational": 4, 00:23:50.077 "process": { 00:23:50.077 "type": "rebuild", 00:23:50.077 "target": "spare", 00:23:50.077 "progress": { 00:23:50.077 "blocks": 145920, 00:23:50.077 "percent": 74 00:23:50.077 } 00:23:50.077 }, 00:23:50.077 "base_bdevs_list": [ 00:23:50.077 { 00:23:50.077 "name": "spare", 00:23:50.077 "uuid": "c2898d7a-c01d-5e26-9add-2bd8214b5370", 00:23:50.077 "is_configured": true, 00:23:50.077 "data_offset": 0, 00:23:50.077 "data_size": 65536 00:23:50.077 }, 00:23:50.077 { 00:23:50.077 "name": "BaseBdev2", 00:23:50.077 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:50.077 "is_configured": true, 00:23:50.077 "data_offset": 0, 00:23:50.077 "data_size": 65536 00:23:50.077 }, 00:23:50.077 { 00:23:50.077 "name": "BaseBdev3", 00:23:50.077 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:50.077 "is_configured": true, 00:23:50.077 "data_offset": 0, 00:23:50.077 "data_size": 65536 00:23:50.077 }, 00:23:50.077 { 00:23:50.077 "name": "BaseBdev4", 00:23:50.077 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:50.077 "is_configured": true, 00:23:50.077 "data_offset": 0, 00:23:50.077 "data_size": 65536 00:23:50.077 } 00:23:50.077 ] 00:23:50.077 }' 00:23:50.077 05:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:50.336 05:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:50.336 05:34:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:50.336 05:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:50.336 05:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:51.269 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:51.269 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:51.269 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:51.269 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:51.269 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:51.269 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:51.269 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.269 05:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.269 05:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.269 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.269 05:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.269 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:51.269 "name": "raid_bdev1", 00:23:51.269 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:51.269 "strip_size_kb": 64, 00:23:51.269 "state": "online", 00:23:51.269 "raid_level": "raid5f", 00:23:51.269 "superblock": false, 00:23:51.270 "num_base_bdevs": 4, 00:23:51.270 "num_base_bdevs_discovered": 4, 00:23:51.270 "num_base_bdevs_operational": 4, 00:23:51.270 "process": { 00:23:51.270 
"type": "rebuild", 00:23:51.270 "target": "spare", 00:23:51.270 "progress": { 00:23:51.270 "blocks": 167040, 00:23:51.270 "percent": 84 00:23:51.270 } 00:23:51.270 }, 00:23:51.270 "base_bdevs_list": [ 00:23:51.270 { 00:23:51.270 "name": "spare", 00:23:51.270 "uuid": "c2898d7a-c01d-5e26-9add-2bd8214b5370", 00:23:51.270 "is_configured": true, 00:23:51.270 "data_offset": 0, 00:23:51.270 "data_size": 65536 00:23:51.270 }, 00:23:51.270 { 00:23:51.270 "name": "BaseBdev2", 00:23:51.270 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:51.270 "is_configured": true, 00:23:51.270 "data_offset": 0, 00:23:51.270 "data_size": 65536 00:23:51.270 }, 00:23:51.270 { 00:23:51.270 "name": "BaseBdev3", 00:23:51.270 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:51.270 "is_configured": true, 00:23:51.270 "data_offset": 0, 00:23:51.270 "data_size": 65536 00:23:51.270 }, 00:23:51.270 { 00:23:51.270 "name": "BaseBdev4", 00:23:51.270 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:51.270 "is_configured": true, 00:23:51.270 "data_offset": 0, 00:23:51.270 "data_size": 65536 00:23:51.270 } 00:23:51.270 ] 00:23:51.270 }' 00:23:51.270 05:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:51.270 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:51.270 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:51.270 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:51.270 05:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:52.642 "name": "raid_bdev1", 00:23:52.642 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:52.642 "strip_size_kb": 64, 00:23:52.642 "state": "online", 00:23:52.642 "raid_level": "raid5f", 00:23:52.642 "superblock": false, 00:23:52.642 "num_base_bdevs": 4, 00:23:52.642 "num_base_bdevs_discovered": 4, 00:23:52.642 "num_base_bdevs_operational": 4, 00:23:52.642 "process": { 00:23:52.642 "type": "rebuild", 00:23:52.642 "target": "spare", 00:23:52.642 "progress": { 00:23:52.642 "blocks": 188160, 00:23:52.642 "percent": 95 00:23:52.642 } 00:23:52.642 }, 00:23:52.642 "base_bdevs_list": [ 00:23:52.642 { 00:23:52.642 "name": "spare", 00:23:52.642 "uuid": "c2898d7a-c01d-5e26-9add-2bd8214b5370", 00:23:52.642 "is_configured": true, 00:23:52.642 "data_offset": 0, 00:23:52.642 "data_size": 65536 00:23:52.642 }, 00:23:52.642 { 00:23:52.642 "name": "BaseBdev2", 00:23:52.642 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:52.642 "is_configured": true, 00:23:52.642 "data_offset": 0, 00:23:52.642 
"data_size": 65536 00:23:52.642 }, 00:23:52.642 { 00:23:52.642 "name": "BaseBdev3", 00:23:52.642 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:52.642 "is_configured": true, 00:23:52.642 "data_offset": 0, 00:23:52.642 "data_size": 65536 00:23:52.642 }, 00:23:52.642 { 00:23:52.642 "name": "BaseBdev4", 00:23:52.642 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:52.642 "is_configured": true, 00:23:52.642 "data_offset": 0, 00:23:52.642 "data_size": 65536 00:23:52.642 } 00:23:52.642 ] 00:23:52.642 }' 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:52.642 05:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:52.900 [2024-11-20 05:34:24.540134] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:52.900 [2024-11-20 05:34:24.540203] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:52.900 [2024-11-20 05:34:24.540251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:53.466 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:53.466 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:53.466 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:53.466 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:53.466 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:23:53.466 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:53.466 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.466 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.466 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.466 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.466 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.466 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:53.466 "name": "raid_bdev1", 00:23:53.466 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:53.466 "strip_size_kb": 64, 00:23:53.466 "state": "online", 00:23:53.466 "raid_level": "raid5f", 00:23:53.466 "superblock": false, 00:23:53.466 "num_base_bdevs": 4, 00:23:53.466 "num_base_bdevs_discovered": 4, 00:23:53.466 "num_base_bdevs_operational": 4, 00:23:53.466 "base_bdevs_list": [ 00:23:53.466 { 00:23:53.466 "name": "spare", 00:23:53.466 "uuid": "c2898d7a-c01d-5e26-9add-2bd8214b5370", 00:23:53.466 "is_configured": true, 00:23:53.466 "data_offset": 0, 00:23:53.466 "data_size": 65536 00:23:53.466 }, 00:23:53.466 { 00:23:53.466 "name": "BaseBdev2", 00:23:53.466 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:53.466 "is_configured": true, 00:23:53.466 "data_offset": 0, 00:23:53.466 "data_size": 65536 00:23:53.466 }, 00:23:53.466 { 00:23:53.466 "name": "BaseBdev3", 00:23:53.466 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:53.466 "is_configured": true, 00:23:53.466 "data_offset": 0, 00:23:53.466 "data_size": 65536 00:23:53.466 }, 00:23:53.466 { 00:23:53.466 "name": "BaseBdev4", 00:23:53.466 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:53.466 "is_configured": true, 00:23:53.466 "data_offset": 0, 
00:23:53.466 "data_size": 65536 00:23:53.466 } 00:23:53.466 ] 00:23:53.466 }' 00:23:53.466 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:53.467 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:53.467 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:53.467 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:53.467 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:23:53.467 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:53.467 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:53.467 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:53.467 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:53.467 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:53.467 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.467 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.467 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.467 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.467 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.467 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:53.467 "name": "raid_bdev1", 00:23:53.467 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:53.467 "strip_size_kb": 64, 00:23:53.467 "state": "online", 00:23:53.467 "raid_level": 
"raid5f", 00:23:53.467 "superblock": false, 00:23:53.467 "num_base_bdevs": 4, 00:23:53.467 "num_base_bdevs_discovered": 4, 00:23:53.467 "num_base_bdevs_operational": 4, 00:23:53.467 "base_bdevs_list": [ 00:23:53.467 { 00:23:53.467 "name": "spare", 00:23:53.467 "uuid": "c2898d7a-c01d-5e26-9add-2bd8214b5370", 00:23:53.467 "is_configured": true, 00:23:53.467 "data_offset": 0, 00:23:53.467 "data_size": 65536 00:23:53.467 }, 00:23:53.467 { 00:23:53.467 "name": "BaseBdev2", 00:23:53.467 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:53.467 "is_configured": true, 00:23:53.467 "data_offset": 0, 00:23:53.467 "data_size": 65536 00:23:53.467 }, 00:23:53.467 { 00:23:53.467 "name": "BaseBdev3", 00:23:53.467 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:53.467 "is_configured": true, 00:23:53.467 "data_offset": 0, 00:23:53.467 "data_size": 65536 00:23:53.467 }, 00:23:53.467 { 00:23:53.467 "name": "BaseBdev4", 00:23:53.467 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:53.467 "is_configured": true, 00:23:53.467 "data_offset": 0, 00:23:53.467 "data_size": 65536 00:23:53.467 } 00:23:53.467 ] 00:23:53.467 }' 00:23:53.467 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.725 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:53.725 "name": "raid_bdev1", 00:23:53.725 "uuid": "99d4e494-2e8b-4eb2-be13-22159dc9d7db", 00:23:53.725 "strip_size_kb": 64, 00:23:53.725 "state": "online", 00:23:53.725 "raid_level": "raid5f", 00:23:53.725 "superblock": false, 00:23:53.725 "num_base_bdevs": 4, 00:23:53.725 "num_base_bdevs_discovered": 4, 00:23:53.725 "num_base_bdevs_operational": 4, 00:23:53.725 "base_bdevs_list": [ 00:23:53.725 { 00:23:53.725 "name": "spare", 00:23:53.725 "uuid": "c2898d7a-c01d-5e26-9add-2bd8214b5370", 00:23:53.725 "is_configured": true, 00:23:53.725 "data_offset": 0, 00:23:53.725 "data_size": 65536 00:23:53.725 }, 00:23:53.725 { 00:23:53.725 "name": "BaseBdev2", 
00:23:53.725 "uuid": "207c3131-b55c-5777-a9b4-b31efff0619a", 00:23:53.725 "is_configured": true, 00:23:53.725 "data_offset": 0, 00:23:53.725 "data_size": 65536 00:23:53.725 }, 00:23:53.725 { 00:23:53.725 "name": "BaseBdev3", 00:23:53.725 "uuid": "1aa205a4-6db1-5119-b99e-050ed1cad635", 00:23:53.725 "is_configured": true, 00:23:53.725 "data_offset": 0, 00:23:53.725 "data_size": 65536 00:23:53.726 }, 00:23:53.726 { 00:23:53.726 "name": "BaseBdev4", 00:23:53.726 "uuid": "32a87bd9-9ba7-5b73-83dd-af5370e73b07", 00:23:53.726 "is_configured": true, 00:23:53.726 "data_offset": 0, 00:23:53.726 "data_size": 65536 00:23:53.726 } 00:23:53.726 ] 00:23:53.726 }' 00:23:53.726 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:53.726 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.984 [2024-11-20 05:34:25.693174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:53.984 [2024-11-20 05:34:25.693208] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:53.984 [2024-11-20 05:34:25.693277] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:53.984 [2024-11-20 05:34:25.693355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:53.984 [2024-11-20 05:34:25.693375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:53.984 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:54.312 /dev/nbd0 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:54.312 1+0 records in 00:23:54.312 1+0 records out 00:23:54.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262014 s, 15.6 MB/s 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:54.312 05:34:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:54.571 /dev/nbd1 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:54.571 1+0 records in 00:23:54.571 1+0 records out 00:23:54.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028785 s, 14.2 MB/s 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:54.571 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:54.830 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:54.830 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:54.830 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:54.830 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:54.830 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:54.830 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:23:54.830 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:54.831 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:54.831 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:54.831 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82233 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 82233 ']' 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 82233 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 82233 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:55.090 killing process with pid 82233 00:23:55.090 Received shutdown signal, test time was about 60.000000 seconds 00:23:55.090 00:23:55.090 Latency(us) 00:23:55.090 [2024-11-20T05:34:26.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.090 [2024-11-20T05:34:26.925Z] =================================================================================================================== 00:23:55.090 [2024-11-20T05:34:26.925Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82233' 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 82233 00:23:55.090 [2024-11-20 05:34:26.787431] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:55.090 05:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 82233 00:23:55.347 [2024-11-20 05:34:27.031390] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:55.913 ************************************ 00:23:55.913 END TEST raid5f_rebuild_test 00:23:55.913 ************************************ 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:23:55.913 00:23:55.913 real 0m17.783s 00:23:55.913 user 0m20.873s 00:23:55.913 sys 0m1.760s 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.913 05:34:27 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:23:55.913 05:34:27 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:23:55.913 05:34:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:55.913 05:34:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:55.913 ************************************ 00:23:55.913 START TEST raid5f_rebuild_test_sb 00:23:55.913 ************************************ 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:23:55.913 05:34:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:55.913 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82733 
00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82733 00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82733 ']' 00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:55.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:55.914 05:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.914 [2024-11-20 05:34:27.715408] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:23:55.914 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:55.914 Zero copy mechanism will not be used. 
00:23:55.914 [2024-11-20 05:34:27.716047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82733 ] 00:23:56.172 [2024-11-20 05:34:27.875776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.172 [2024-11-20 05:34:27.994901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.429 [2024-11-20 05:34:28.142172] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.429 [2024-11-20 05:34:28.142416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.996 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:56.996 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:23:56.996 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:56.996 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:56.996 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.996 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.996 BaseBdev1_malloc 00:23:56.996 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.996 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:56.996 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.996 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.996 [2024-11-20 05:34:28.607556] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:56.996 [2024-11-20 05:34:28.607621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.996 [2024-11-20 05:34:28.607644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:56.996 [2024-11-20 05:34:28.607657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.996 [2024-11-20 05:34:28.609901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.996 [2024-11-20 05:34:28.610049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:56.996 BaseBdev1 00:23:56.996 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.997 BaseBdev2_malloc 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.997 [2024-11-20 05:34:28.647838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:56.997 [2024-11-20 05:34:28.647909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:23:56.997 [2024-11-20 05:34:28.647928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:56.997 [2024-11-20 05:34:28.647940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.997 [2024-11-20 05:34:28.650083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.997 [2024-11-20 05:34:28.650122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:56.997 BaseBdev2 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.997 BaseBdev3_malloc 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.997 [2024-11-20 05:34:28.697800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:56.997 [2024-11-20 05:34:28.697866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.997 [2024-11-20 05:34:28.697889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:56.997 [2024-11-20 
05:34:28.697900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.997 [2024-11-20 05:34:28.700010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.997 [2024-11-20 05:34:28.700050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:56.997 BaseBdev3 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.997 BaseBdev4_malloc 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.997 [2024-11-20 05:34:28.734070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:56.997 [2024-11-20 05:34:28.734127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.997 [2024-11-20 05:34:28.734144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:56.997 [2024-11-20 05:34:28.734155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.997 [2024-11-20 05:34:28.736253] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:23:56.997 [2024-11-20 05:34:28.736291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:56.997 BaseBdev4 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.997 spare_malloc 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.997 spare_delay 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.997 [2024-11-20 05:34:28.778060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:56.997 [2024-11-20 05:34:28.778116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.997 [2024-11-20 05:34:28.778134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:23:56.997 [2024-11-20 05:34:28.778145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.997 [2024-11-20 05:34:28.780288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.997 [2024-11-20 05:34:28.780324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:56.997 spare 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.997 [2024-11-20 05:34:28.786114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:56.997 [2024-11-20 05:34:28.788108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:56.997 [2024-11-20 05:34:28.788248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:56.997 [2024-11-20 05:34:28.788324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:56.997 [2024-11-20 05:34:28.788574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:56.997 [2024-11-20 05:34:28.788618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:56.997 [2024-11-20 05:34:28.788926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:56.997 [2024-11-20 05:34:28.793900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:56.997 [2024-11-20 05:34:28.793988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:23:56.997 [2024-11-20 05:34:28.794225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.997 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.255 05:34:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:57.255 "name": "raid_bdev1", 00:23:57.255 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:23:57.255 "strip_size_kb": 64, 00:23:57.255 "state": "online", 00:23:57.255 "raid_level": "raid5f", 00:23:57.255 "superblock": true, 00:23:57.255 "num_base_bdevs": 4, 00:23:57.255 "num_base_bdevs_discovered": 4, 00:23:57.255 "num_base_bdevs_operational": 4, 00:23:57.255 "base_bdevs_list": [ 00:23:57.255 { 00:23:57.255 "name": "BaseBdev1", 00:23:57.255 "uuid": "fed01f48-503e-5ca8-ba8f-4c62430853c9", 00:23:57.255 "is_configured": true, 00:23:57.255 "data_offset": 2048, 00:23:57.255 "data_size": 63488 00:23:57.255 }, 00:23:57.255 { 00:23:57.255 "name": "BaseBdev2", 00:23:57.255 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:23:57.255 "is_configured": true, 00:23:57.255 "data_offset": 2048, 00:23:57.255 "data_size": 63488 00:23:57.255 }, 00:23:57.255 { 00:23:57.255 "name": "BaseBdev3", 00:23:57.255 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:23:57.255 "is_configured": true, 00:23:57.255 "data_offset": 2048, 00:23:57.255 "data_size": 63488 00:23:57.255 }, 00:23:57.255 { 00:23:57.255 "name": "BaseBdev4", 00:23:57.255 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:23:57.255 "is_configured": true, 00:23:57.255 "data_offset": 2048, 00:23:57.255 "data_size": 63488 00:23:57.255 } 00:23:57.255 ] 00:23:57.255 }' 00:23:57.255 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:57.255 05:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.515 05:34:29 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.515 [2024-11-20 05:34:29.119954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:57.515 05:34:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:57.515 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:57.773 [2024-11-20 05:34:29.379837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:57.773 /dev/nbd0 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:57.773 1+0 records in 00:23:57.773 
1+0 records out 00:23:57.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236612 s, 17.3 MB/s 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:23:57.773 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:23:58.338 496+0 records in 00:23:58.338 496+0 records out 00:23:58.338 97517568 bytes (98 MB, 93 MiB) copied, 0.508163 s, 192 MB/s 00:23:58.338 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:58.338 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:58.338 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:58.338 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:58.338 05:34:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:58.338 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:58.338 05:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:58.338 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:58.338 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:58.338 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:58.338 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:58.338 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:58.338 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:58.338 [2024-11-20 05:34:30.158532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.338 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:58.338 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:58.338 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:58.338 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.338 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:58.596 [2024-11-20 05:34:30.172045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:58.596 05:34:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:58.596 "name": "raid_bdev1", 00:23:58.596 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:23:58.596 "strip_size_kb": 64, 00:23:58.596 "state": "online", 00:23:58.596 "raid_level": "raid5f", 00:23:58.596 "superblock": true, 00:23:58.596 "num_base_bdevs": 4, 00:23:58.596 "num_base_bdevs_discovered": 3, 00:23:58.596 "num_base_bdevs_operational": 3, 00:23:58.596 
"base_bdevs_list": [ 00:23:58.596 { 00:23:58.596 "name": null, 00:23:58.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.596 "is_configured": false, 00:23:58.596 "data_offset": 0, 00:23:58.596 "data_size": 63488 00:23:58.596 }, 00:23:58.596 { 00:23:58.596 "name": "BaseBdev2", 00:23:58.596 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:23:58.596 "is_configured": true, 00:23:58.596 "data_offset": 2048, 00:23:58.596 "data_size": 63488 00:23:58.596 }, 00:23:58.596 { 00:23:58.596 "name": "BaseBdev3", 00:23:58.596 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:23:58.596 "is_configured": true, 00:23:58.596 "data_offset": 2048, 00:23:58.596 "data_size": 63488 00:23:58.596 }, 00:23:58.596 { 00:23:58.596 "name": "BaseBdev4", 00:23:58.596 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:23:58.596 "is_configured": true, 00:23:58.596 "data_offset": 2048, 00:23:58.596 "data_size": 63488 00:23:58.596 } 00:23:58.596 ] 00:23:58.596 }' 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:58.596 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:58.854 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:58.854 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.854 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:58.854 [2024-11-20 05:34:30.484112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:58.854 [2024-11-20 05:34:30.494300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:23:58.854 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.854 05:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:58.854 [2024-11-20 05:34:30.501005] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:59.788 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:59.788 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:59.788 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:59.788 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:59.788 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:59.788 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.788 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.788 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.788 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:59.788 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.788 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:59.788 "name": "raid_bdev1", 00:23:59.788 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:23:59.788 "strip_size_kb": 64, 00:23:59.788 "state": "online", 00:23:59.788 "raid_level": "raid5f", 00:23:59.788 "superblock": true, 00:23:59.788 "num_base_bdevs": 4, 00:23:59.788 "num_base_bdevs_discovered": 4, 00:23:59.788 "num_base_bdevs_operational": 4, 00:23:59.788 "process": { 00:23:59.788 "type": "rebuild", 00:23:59.788 "target": "spare", 00:23:59.788 "progress": { 00:23:59.788 "blocks": 19200, 00:23:59.788 "percent": 10 00:23:59.788 } 00:23:59.788 }, 00:23:59.788 "base_bdevs_list": [ 00:23:59.788 { 00:23:59.788 "name": "spare", 00:23:59.788 "uuid": 
"1f2b38f4-5625-5d04-8086-57ece210b825", 00:23:59.788 "is_configured": true, 00:23:59.789 "data_offset": 2048, 00:23:59.789 "data_size": 63488 00:23:59.789 }, 00:23:59.789 { 00:23:59.789 "name": "BaseBdev2", 00:23:59.789 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:23:59.789 "is_configured": true, 00:23:59.789 "data_offset": 2048, 00:23:59.789 "data_size": 63488 00:23:59.789 }, 00:23:59.789 { 00:23:59.789 "name": "BaseBdev3", 00:23:59.789 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:23:59.789 "is_configured": true, 00:23:59.789 "data_offset": 2048, 00:23:59.789 "data_size": 63488 00:23:59.789 }, 00:23:59.789 { 00:23:59.789 "name": "BaseBdev4", 00:23:59.789 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:23:59.789 "is_configured": true, 00:23:59.789 "data_offset": 2048, 00:23:59.789 "data_size": 63488 00:23:59.789 } 00:23:59.789 ] 00:23:59.789 }' 00:23:59.789 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:59.789 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:59.789 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:59.789 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:59.789 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:59.789 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.789 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:59.789 [2024-11-20 05:34:31.597982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:59.789 [2024-11-20 05:34:31.609554] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:59.789 [2024-11-20 05:34:31.609629] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:59.789 [2024-11-20 05:34:31.609647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:59.789 [2024-11-20 05:34:31.609656] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:00.047 "name": "raid_bdev1", 00:24:00.047 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:00.047 "strip_size_kb": 64, 00:24:00.047 "state": "online", 00:24:00.047 "raid_level": "raid5f", 00:24:00.047 "superblock": true, 00:24:00.047 "num_base_bdevs": 4, 00:24:00.047 "num_base_bdevs_discovered": 3, 00:24:00.047 "num_base_bdevs_operational": 3, 00:24:00.047 "base_bdevs_list": [ 00:24:00.047 { 00:24:00.047 "name": null, 00:24:00.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.047 "is_configured": false, 00:24:00.047 "data_offset": 0, 00:24:00.047 "data_size": 63488 00:24:00.047 }, 00:24:00.047 { 00:24:00.047 "name": "BaseBdev2", 00:24:00.047 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:00.047 "is_configured": true, 00:24:00.047 "data_offset": 2048, 00:24:00.047 "data_size": 63488 00:24:00.047 }, 00:24:00.047 { 00:24:00.047 "name": "BaseBdev3", 00:24:00.047 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:00.047 "is_configured": true, 00:24:00.047 "data_offset": 2048, 00:24:00.047 "data_size": 63488 00:24:00.047 }, 00:24:00.047 { 00:24:00.047 "name": "BaseBdev4", 00:24:00.047 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:00.047 "is_configured": true, 00:24:00.047 "data_offset": 2048, 00:24:00.047 "data_size": 63488 00:24:00.047 } 00:24:00.047 ] 00:24:00.047 }' 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:00.047 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:00.306 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:00.306 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:00.306 
05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:00.306 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:00.306 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:00.306 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.306 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.306 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.306 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:00.306 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.306 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:00.306 "name": "raid_bdev1", 00:24:00.306 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:00.306 "strip_size_kb": 64, 00:24:00.306 "state": "online", 00:24:00.306 "raid_level": "raid5f", 00:24:00.306 "superblock": true, 00:24:00.306 "num_base_bdevs": 4, 00:24:00.306 "num_base_bdevs_discovered": 3, 00:24:00.306 "num_base_bdevs_operational": 3, 00:24:00.306 "base_bdevs_list": [ 00:24:00.306 { 00:24:00.306 "name": null, 00:24:00.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.306 "is_configured": false, 00:24:00.306 "data_offset": 0, 00:24:00.306 "data_size": 63488 00:24:00.306 }, 00:24:00.306 { 00:24:00.306 "name": "BaseBdev2", 00:24:00.306 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:00.306 "is_configured": true, 00:24:00.306 "data_offset": 2048, 00:24:00.306 "data_size": 63488 00:24:00.306 }, 00:24:00.306 { 00:24:00.306 "name": "BaseBdev3", 00:24:00.306 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:00.306 "is_configured": true, 00:24:00.306 "data_offset": 2048, 00:24:00.306 
"data_size": 63488 00:24:00.306 }, 00:24:00.306 { 00:24:00.306 "name": "BaseBdev4", 00:24:00.306 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:00.306 "is_configured": true, 00:24:00.306 "data_offset": 2048, 00:24:00.306 "data_size": 63488 00:24:00.306 } 00:24:00.306 ] 00:24:00.306 }' 00:24:00.306 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:00.306 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:00.306 05:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:00.306 05:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:00.306 05:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:00.306 05:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.306 05:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:00.306 [2024-11-20 05:34:32.024847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:00.306 [2024-11-20 05:34:32.034613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:24:00.306 05:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.306 05:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:00.306 [2024-11-20 05:34:32.041105] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:01.239 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:01.239 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:01.239 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:01.239 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:01.239 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:01.239 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.239 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.239 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.239 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:01.239 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.497 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:01.497 "name": "raid_bdev1", 00:24:01.497 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:01.497 "strip_size_kb": 64, 00:24:01.497 "state": "online", 00:24:01.497 "raid_level": "raid5f", 00:24:01.497 "superblock": true, 00:24:01.497 "num_base_bdevs": 4, 00:24:01.497 "num_base_bdevs_discovered": 4, 00:24:01.497 "num_base_bdevs_operational": 4, 00:24:01.497 "process": { 00:24:01.497 "type": "rebuild", 00:24:01.497 "target": "spare", 00:24:01.497 "progress": { 00:24:01.497 "blocks": 19200, 00:24:01.497 "percent": 10 00:24:01.497 } 00:24:01.497 }, 00:24:01.497 "base_bdevs_list": [ 00:24:01.497 { 00:24:01.497 "name": "spare", 00:24:01.497 "uuid": "1f2b38f4-5625-5d04-8086-57ece210b825", 00:24:01.497 "is_configured": true, 00:24:01.497 "data_offset": 2048, 00:24:01.497 "data_size": 63488 00:24:01.497 }, 00:24:01.497 { 00:24:01.497 "name": "BaseBdev2", 00:24:01.497 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:01.497 "is_configured": true, 00:24:01.497 "data_offset": 2048, 00:24:01.497 "data_size": 63488 00:24:01.497 }, 00:24:01.497 { 
00:24:01.497 "name": "BaseBdev3", 00:24:01.497 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:01.497 "is_configured": true, 00:24:01.497 "data_offset": 2048, 00:24:01.497 "data_size": 63488 00:24:01.497 }, 00:24:01.497 { 00:24:01.497 "name": "BaseBdev4", 00:24:01.497 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:01.497 "is_configured": true, 00:24:01.497 "data_offset": 2048, 00:24:01.497 "data_size": 63488 00:24:01.497 } 00:24:01.497 ] 00:24:01.497 }' 00:24:01.497 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:01.497 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:01.497 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:01.497 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:01.498 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=508 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:01.498 "name": "raid_bdev1", 00:24:01.498 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:01.498 "strip_size_kb": 64, 00:24:01.498 "state": "online", 00:24:01.498 "raid_level": "raid5f", 00:24:01.498 "superblock": true, 00:24:01.498 "num_base_bdevs": 4, 00:24:01.498 "num_base_bdevs_discovered": 4, 00:24:01.498 "num_base_bdevs_operational": 4, 00:24:01.498 "process": { 00:24:01.498 "type": "rebuild", 00:24:01.498 "target": "spare", 00:24:01.498 "progress": { 00:24:01.498 "blocks": 21120, 00:24:01.498 "percent": 11 00:24:01.498 } 00:24:01.498 }, 00:24:01.498 "base_bdevs_list": [ 00:24:01.498 { 00:24:01.498 "name": "spare", 00:24:01.498 "uuid": "1f2b38f4-5625-5d04-8086-57ece210b825", 00:24:01.498 "is_configured": true, 00:24:01.498 "data_offset": 2048, 00:24:01.498 "data_size": 63488 00:24:01.498 }, 00:24:01.498 { 00:24:01.498 "name": "BaseBdev2", 00:24:01.498 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:01.498 "is_configured": true, 00:24:01.498 "data_offset": 2048, 00:24:01.498 "data_size": 63488 00:24:01.498 }, 00:24:01.498 { 
00:24:01.498 "name": "BaseBdev3", 00:24:01.498 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:01.498 "is_configured": true, 00:24:01.498 "data_offset": 2048, 00:24:01.498 "data_size": 63488 00:24:01.498 }, 00:24:01.498 { 00:24:01.498 "name": "BaseBdev4", 00:24:01.498 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:01.498 "is_configured": true, 00:24:01.498 "data_offset": 2048, 00:24:01.498 "data_size": 63488 00:24:01.498 } 00:24:01.498 ] 00:24:01.498 }' 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:01.498 05:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:02.431 05:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:02.431 05:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:02.431 05:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:02.431 05:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:02.431 05:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:02.431 05:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:02.431 05:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:02.431 05:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.431 05:34:34 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.431 05:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:02.691 05:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.691 05:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:02.691 "name": "raid_bdev1", 00:24:02.691 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:02.691 "strip_size_kb": 64, 00:24:02.691 "state": "online", 00:24:02.691 "raid_level": "raid5f", 00:24:02.691 "superblock": true, 00:24:02.691 "num_base_bdevs": 4, 00:24:02.691 "num_base_bdevs_discovered": 4, 00:24:02.691 "num_base_bdevs_operational": 4, 00:24:02.691 "process": { 00:24:02.691 "type": "rebuild", 00:24:02.691 "target": "spare", 00:24:02.691 "progress": { 00:24:02.691 "blocks": 42240, 00:24:02.691 "percent": 22 00:24:02.691 } 00:24:02.691 }, 00:24:02.691 "base_bdevs_list": [ 00:24:02.691 { 00:24:02.691 "name": "spare", 00:24:02.691 "uuid": "1f2b38f4-5625-5d04-8086-57ece210b825", 00:24:02.691 "is_configured": true, 00:24:02.691 "data_offset": 2048, 00:24:02.691 "data_size": 63488 00:24:02.691 }, 00:24:02.691 { 00:24:02.691 "name": "BaseBdev2", 00:24:02.691 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:02.691 "is_configured": true, 00:24:02.691 "data_offset": 2048, 00:24:02.691 "data_size": 63488 00:24:02.691 }, 00:24:02.691 { 00:24:02.691 "name": "BaseBdev3", 00:24:02.691 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:02.691 "is_configured": true, 00:24:02.691 "data_offset": 2048, 00:24:02.691 "data_size": 63488 00:24:02.691 }, 00:24:02.691 { 00:24:02.691 "name": "BaseBdev4", 00:24:02.691 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:02.691 "is_configured": true, 00:24:02.691 "data_offset": 2048, 00:24:02.691 "data_size": 63488 00:24:02.691 } 00:24:02.691 ] 00:24:02.691 }' 00:24:02.691 05:34:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:02.691 05:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:02.691 05:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:02.691 05:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:02.691 05:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:03.721 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:03.721 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:03.721 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:03.721 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:03.721 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:03.721 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:03.721 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.721 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.721 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.721 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.721 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.721 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:03.721 "name": "raid_bdev1", 00:24:03.721 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:03.721 "strip_size_kb": 64, 00:24:03.721 "state": 
"online", 00:24:03.721 "raid_level": "raid5f", 00:24:03.721 "superblock": true, 00:24:03.721 "num_base_bdevs": 4, 00:24:03.721 "num_base_bdevs_discovered": 4, 00:24:03.722 "num_base_bdevs_operational": 4, 00:24:03.722 "process": { 00:24:03.722 "type": "rebuild", 00:24:03.722 "target": "spare", 00:24:03.722 "progress": { 00:24:03.722 "blocks": 61440, 00:24:03.722 "percent": 32 00:24:03.722 } 00:24:03.722 }, 00:24:03.722 "base_bdevs_list": [ 00:24:03.722 { 00:24:03.722 "name": "spare", 00:24:03.722 "uuid": "1f2b38f4-5625-5d04-8086-57ece210b825", 00:24:03.722 "is_configured": true, 00:24:03.722 "data_offset": 2048, 00:24:03.722 "data_size": 63488 00:24:03.722 }, 00:24:03.722 { 00:24:03.722 "name": "BaseBdev2", 00:24:03.722 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:03.722 "is_configured": true, 00:24:03.722 "data_offset": 2048, 00:24:03.722 "data_size": 63488 00:24:03.722 }, 00:24:03.722 { 00:24:03.722 "name": "BaseBdev3", 00:24:03.722 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:03.722 "is_configured": true, 00:24:03.722 "data_offset": 2048, 00:24:03.722 "data_size": 63488 00:24:03.722 }, 00:24:03.722 { 00:24:03.722 "name": "BaseBdev4", 00:24:03.722 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:03.722 "is_configured": true, 00:24:03.722 "data_offset": 2048, 00:24:03.722 "data_size": 63488 00:24:03.722 } 00:24:03.722 ] 00:24:03.722 }' 00:24:03.722 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:03.722 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:03.722 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:03.722 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:03.722 05:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:04.665 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:04.665 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.665 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:04.665 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:04.665 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:04.665 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:04.665 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.665 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.665 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:04.665 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.665 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.927 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:04.927 "name": "raid_bdev1", 00:24:04.927 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:04.927 "strip_size_kb": 64, 00:24:04.927 "state": "online", 00:24:04.927 "raid_level": "raid5f", 00:24:04.927 "superblock": true, 00:24:04.927 "num_base_bdevs": 4, 00:24:04.927 "num_base_bdevs_discovered": 4, 00:24:04.927 "num_base_bdevs_operational": 4, 00:24:04.927 "process": { 00:24:04.927 "type": "rebuild", 00:24:04.927 "target": "spare", 00:24:04.927 "progress": { 00:24:04.927 "blocks": 82560, 00:24:04.927 "percent": 43 00:24:04.927 } 00:24:04.927 }, 00:24:04.927 "base_bdevs_list": [ 00:24:04.927 { 00:24:04.927 "name": "spare", 00:24:04.927 "uuid": "1f2b38f4-5625-5d04-8086-57ece210b825", 
00:24:04.927 "is_configured": true, 00:24:04.927 "data_offset": 2048, 00:24:04.927 "data_size": 63488 00:24:04.927 }, 00:24:04.927 { 00:24:04.927 "name": "BaseBdev2", 00:24:04.927 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:04.927 "is_configured": true, 00:24:04.927 "data_offset": 2048, 00:24:04.927 "data_size": 63488 00:24:04.927 }, 00:24:04.927 { 00:24:04.927 "name": "BaseBdev3", 00:24:04.927 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:04.927 "is_configured": true, 00:24:04.927 "data_offset": 2048, 00:24:04.927 "data_size": 63488 00:24:04.927 }, 00:24:04.927 { 00:24:04.927 "name": "BaseBdev4", 00:24:04.927 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:04.927 "is_configured": true, 00:24:04.927 "data_offset": 2048, 00:24:04.927 "data_size": 63488 00:24:04.927 } 00:24:04.927 ] 00:24:04.927 }' 00:24:04.927 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:04.927 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:04.927 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:04.927 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.927 05:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:05.862 05:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:05.862 05:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.862 05:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:05.862 05:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:05.863 05:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:05.863 05:34:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:05.863 05:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.863 05:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.863 05:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:05.863 05:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.863 05:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.863 05:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:05.863 "name": "raid_bdev1", 00:24:05.863 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:05.863 "strip_size_kb": 64, 00:24:05.863 "state": "online", 00:24:05.863 "raid_level": "raid5f", 00:24:05.863 "superblock": true, 00:24:05.863 "num_base_bdevs": 4, 00:24:05.863 "num_base_bdevs_discovered": 4, 00:24:05.863 "num_base_bdevs_operational": 4, 00:24:05.863 "process": { 00:24:05.863 "type": "rebuild", 00:24:05.863 "target": "spare", 00:24:05.863 "progress": { 00:24:05.863 "blocks": 105600, 00:24:05.863 "percent": 55 00:24:05.863 } 00:24:05.863 }, 00:24:05.863 "base_bdevs_list": [ 00:24:05.863 { 00:24:05.863 "name": "spare", 00:24:05.863 "uuid": "1f2b38f4-5625-5d04-8086-57ece210b825", 00:24:05.863 "is_configured": true, 00:24:05.863 "data_offset": 2048, 00:24:05.863 "data_size": 63488 00:24:05.863 }, 00:24:05.863 { 00:24:05.863 "name": "BaseBdev2", 00:24:05.863 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:05.863 "is_configured": true, 00:24:05.863 "data_offset": 2048, 00:24:05.863 "data_size": 63488 00:24:05.863 }, 00:24:05.863 { 00:24:05.863 "name": "BaseBdev3", 00:24:05.863 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:05.863 "is_configured": true, 00:24:05.863 "data_offset": 2048, 00:24:05.863 
"data_size": 63488 00:24:05.863 }, 00:24:05.863 { 00:24:05.863 "name": "BaseBdev4", 00:24:05.863 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:05.863 "is_configured": true, 00:24:05.863 "data_offset": 2048, 00:24:05.863 "data_size": 63488 00:24:05.863 } 00:24:05.863 ] 00:24:05.863 }' 00:24:05.863 05:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:05.863 05:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.863 05:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:05.863 05:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.863 05:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:07.234 05:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:07.234 05:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:07.234 05:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:07.234 05:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:07.234 05:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:07.234 05:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:07.234 05:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.234 05:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.234 05:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.234 05:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.234 
05:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.234 05:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:07.234 "name": "raid_bdev1", 00:24:07.234 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:07.234 "strip_size_kb": 64, 00:24:07.234 "state": "online", 00:24:07.234 "raid_level": "raid5f", 00:24:07.234 "superblock": true, 00:24:07.234 "num_base_bdevs": 4, 00:24:07.234 "num_base_bdevs_discovered": 4, 00:24:07.234 "num_base_bdevs_operational": 4, 00:24:07.234 "process": { 00:24:07.234 "type": "rebuild", 00:24:07.234 "target": "spare", 00:24:07.234 "progress": { 00:24:07.234 "blocks": 126720, 00:24:07.234 "percent": 66 00:24:07.234 } 00:24:07.234 }, 00:24:07.234 "base_bdevs_list": [ 00:24:07.234 { 00:24:07.234 "name": "spare", 00:24:07.234 "uuid": "1f2b38f4-5625-5d04-8086-57ece210b825", 00:24:07.234 "is_configured": true, 00:24:07.234 "data_offset": 2048, 00:24:07.234 "data_size": 63488 00:24:07.234 }, 00:24:07.234 { 00:24:07.234 "name": "BaseBdev2", 00:24:07.234 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:07.234 "is_configured": true, 00:24:07.234 "data_offset": 2048, 00:24:07.234 "data_size": 63488 00:24:07.234 }, 00:24:07.234 { 00:24:07.234 "name": "BaseBdev3", 00:24:07.234 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:07.234 "is_configured": true, 00:24:07.234 "data_offset": 2048, 00:24:07.234 "data_size": 63488 00:24:07.234 }, 00:24:07.234 { 00:24:07.234 "name": "BaseBdev4", 00:24:07.234 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:07.234 "is_configured": true, 00:24:07.234 "data_offset": 2048, 00:24:07.234 "data_size": 63488 00:24:07.234 } 00:24:07.234 ] 00:24:07.234 }' 00:24:07.234 05:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:07.234 05:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:07.234 05:34:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:07.234 05:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.234 05:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:08.167 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:08.167 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.167 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:08.167 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:08.167 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:08.167 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:08.167 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.167 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.167 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.167 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:08.167 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.167 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:08.167 "name": "raid_bdev1", 00:24:08.168 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:08.168 "strip_size_kb": 64, 00:24:08.168 "state": "online", 00:24:08.168 "raid_level": "raid5f", 00:24:08.168 "superblock": true, 00:24:08.168 "num_base_bdevs": 4, 00:24:08.168 "num_base_bdevs_discovered": 4, 00:24:08.168 "num_base_bdevs_operational": 
4, 00:24:08.168 "process": { 00:24:08.168 "type": "rebuild", 00:24:08.168 "target": "spare", 00:24:08.168 "progress": { 00:24:08.168 "blocks": 147840, 00:24:08.168 "percent": 77 00:24:08.168 } 00:24:08.168 }, 00:24:08.168 "base_bdevs_list": [ 00:24:08.168 { 00:24:08.168 "name": "spare", 00:24:08.168 "uuid": "1f2b38f4-5625-5d04-8086-57ece210b825", 00:24:08.168 "is_configured": true, 00:24:08.168 "data_offset": 2048, 00:24:08.168 "data_size": 63488 00:24:08.168 }, 00:24:08.168 { 00:24:08.168 "name": "BaseBdev2", 00:24:08.168 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:08.168 "is_configured": true, 00:24:08.168 "data_offset": 2048, 00:24:08.168 "data_size": 63488 00:24:08.168 }, 00:24:08.168 { 00:24:08.168 "name": "BaseBdev3", 00:24:08.168 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:08.168 "is_configured": true, 00:24:08.168 "data_offset": 2048, 00:24:08.168 "data_size": 63488 00:24:08.168 }, 00:24:08.168 { 00:24:08.168 "name": "BaseBdev4", 00:24:08.168 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:08.168 "is_configured": true, 00:24:08.168 "data_offset": 2048, 00:24:08.168 "data_size": 63488 00:24:08.168 } 00:24:08.168 ] 00:24:08.168 }' 00:24:08.168 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:08.168 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:08.168 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:08.168 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:08.168 05:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:09.101 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:09.101 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:09.101 
05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:09.101 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:09.102 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:09.102 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:09.102 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.102 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.102 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.102 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:09.102 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.102 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:09.102 "name": "raid_bdev1", 00:24:09.102 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:09.102 "strip_size_kb": 64, 00:24:09.102 "state": "online", 00:24:09.102 "raid_level": "raid5f", 00:24:09.102 "superblock": true, 00:24:09.102 "num_base_bdevs": 4, 00:24:09.102 "num_base_bdevs_discovered": 4, 00:24:09.102 "num_base_bdevs_operational": 4, 00:24:09.102 "process": { 00:24:09.102 "type": "rebuild", 00:24:09.102 "target": "spare", 00:24:09.102 "progress": { 00:24:09.102 "blocks": 167040, 00:24:09.102 "percent": 87 00:24:09.102 } 00:24:09.102 }, 00:24:09.102 "base_bdevs_list": [ 00:24:09.102 { 00:24:09.102 "name": "spare", 00:24:09.102 "uuid": "1f2b38f4-5625-5d04-8086-57ece210b825", 00:24:09.102 "is_configured": true, 00:24:09.102 "data_offset": 2048, 00:24:09.102 "data_size": 63488 00:24:09.102 }, 00:24:09.102 { 00:24:09.102 "name": "BaseBdev2", 00:24:09.102 "uuid": 
"4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:09.102 "is_configured": true, 00:24:09.102 "data_offset": 2048, 00:24:09.102 "data_size": 63488 00:24:09.102 }, 00:24:09.102 { 00:24:09.102 "name": "BaseBdev3", 00:24:09.102 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:09.102 "is_configured": true, 00:24:09.102 "data_offset": 2048, 00:24:09.102 "data_size": 63488 00:24:09.102 }, 00:24:09.102 { 00:24:09.102 "name": "BaseBdev4", 00:24:09.102 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:09.102 "is_configured": true, 00:24:09.102 "data_offset": 2048, 00:24:09.102 "data_size": 63488 00:24:09.102 } 00:24:09.102 ] 00:24:09.102 }' 00:24:09.102 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:09.359 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:09.359 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:09.359 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:09.359 05:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:10.292 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:10.292 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:10.292 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:10.292 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:10.292 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:10.292 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:10.292 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:24:10.292 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.292 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:10.293 05:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.293 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.293 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:10.293 "name": "raid_bdev1", 00:24:10.293 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:10.293 "strip_size_kb": 64, 00:24:10.293 "state": "online", 00:24:10.293 "raid_level": "raid5f", 00:24:10.293 "superblock": true, 00:24:10.293 "num_base_bdevs": 4, 00:24:10.293 "num_base_bdevs_discovered": 4, 00:24:10.293 "num_base_bdevs_operational": 4, 00:24:10.293 "process": { 00:24:10.293 "type": "rebuild", 00:24:10.293 "target": "spare", 00:24:10.293 "progress": { 00:24:10.293 "blocks": 188160, 00:24:10.293 "percent": 98 00:24:10.293 } 00:24:10.293 }, 00:24:10.293 "base_bdevs_list": [ 00:24:10.293 { 00:24:10.293 "name": "spare", 00:24:10.293 "uuid": "1f2b38f4-5625-5d04-8086-57ece210b825", 00:24:10.293 "is_configured": true, 00:24:10.293 "data_offset": 2048, 00:24:10.293 "data_size": 63488 00:24:10.293 }, 00:24:10.293 { 00:24:10.293 "name": "BaseBdev2", 00:24:10.293 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:10.293 "is_configured": true, 00:24:10.293 "data_offset": 2048, 00:24:10.293 "data_size": 63488 00:24:10.293 }, 00:24:10.293 { 00:24:10.293 "name": "BaseBdev3", 00:24:10.293 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:10.293 "is_configured": true, 00:24:10.293 "data_offset": 2048, 00:24:10.293 "data_size": 63488 00:24:10.293 }, 00:24:10.293 { 00:24:10.293 "name": "BaseBdev4", 00:24:10.293 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:10.293 "is_configured": true, 00:24:10.293 "data_offset": 
2048, 00:24:10.293 "data_size": 63488 00:24:10.293 } 00:24:10.293 ] 00:24:10.293 }' 00:24:10.293 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:10.293 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:10.293 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:10.293 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:10.293 05:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:10.293 [2024-11-20 05:34:42.107433] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:10.293 [2024-11-20 05:34:42.107631] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:10.293 [2024-11-20 05:34:42.107758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:11.745 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:11.745 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:11.745 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:11.745 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:11.745 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:11.745 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:11.745 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.745 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.745 05:34:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.745 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:11.746 "name": "raid_bdev1", 00:24:11.746 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:11.746 "strip_size_kb": 64, 00:24:11.746 "state": "online", 00:24:11.746 "raid_level": "raid5f", 00:24:11.746 "superblock": true, 00:24:11.746 "num_base_bdevs": 4, 00:24:11.746 "num_base_bdevs_discovered": 4, 00:24:11.746 "num_base_bdevs_operational": 4, 00:24:11.746 "base_bdevs_list": [ 00:24:11.746 { 00:24:11.746 "name": "spare", 00:24:11.746 "uuid": "1f2b38f4-5625-5d04-8086-57ece210b825", 00:24:11.746 "is_configured": true, 00:24:11.746 "data_offset": 2048, 00:24:11.746 "data_size": 63488 00:24:11.746 }, 00:24:11.746 { 00:24:11.746 "name": "BaseBdev2", 00:24:11.746 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:11.746 "is_configured": true, 00:24:11.746 "data_offset": 2048, 00:24:11.746 "data_size": 63488 00:24:11.746 }, 00:24:11.746 { 00:24:11.746 "name": "BaseBdev3", 00:24:11.746 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:11.746 "is_configured": true, 00:24:11.746 "data_offset": 2048, 00:24:11.746 "data_size": 63488 00:24:11.746 }, 00:24:11.746 { 00:24:11.746 "name": "BaseBdev4", 00:24:11.746 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:11.746 "is_configured": true, 00:24:11.746 "data_offset": 2048, 00:24:11.746 "data_size": 63488 00:24:11.746 } 00:24:11.746 ] 00:24:11.746 }' 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:11.746 05:34:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:11.746 "name": "raid_bdev1", 00:24:11.746 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:11.746 "strip_size_kb": 64, 00:24:11.746 "state": "online", 00:24:11.746 "raid_level": "raid5f", 00:24:11.746 "superblock": true, 00:24:11.746 "num_base_bdevs": 4, 00:24:11.746 "num_base_bdevs_discovered": 4, 00:24:11.746 "num_base_bdevs_operational": 4, 00:24:11.746 "base_bdevs_list": [ 00:24:11.746 { 00:24:11.746 "name": "spare", 00:24:11.746 "uuid": 
"1f2b38f4-5625-5d04-8086-57ece210b825", 00:24:11.746 "is_configured": true, 00:24:11.746 "data_offset": 2048, 00:24:11.746 "data_size": 63488 00:24:11.746 }, 00:24:11.746 { 00:24:11.746 "name": "BaseBdev2", 00:24:11.746 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:11.746 "is_configured": true, 00:24:11.746 "data_offset": 2048, 00:24:11.746 "data_size": 63488 00:24:11.746 }, 00:24:11.746 { 00:24:11.746 "name": "BaseBdev3", 00:24:11.746 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:11.746 "is_configured": true, 00:24:11.746 "data_offset": 2048, 00:24:11.746 "data_size": 63488 00:24:11.746 }, 00:24:11.746 { 00:24:11.746 "name": "BaseBdev4", 00:24:11.746 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:11.746 "is_configured": true, 00:24:11.746 "data_offset": 2048, 00:24:11.746 "data_size": 63488 00:24:11.746 } 00:24:11.746 ] 00:24:11.746 }' 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:11.746 "name": "raid_bdev1", 00:24:11.746 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:11.746 "strip_size_kb": 64, 00:24:11.746 "state": "online", 00:24:11.746 "raid_level": "raid5f", 00:24:11.746 "superblock": true, 00:24:11.746 "num_base_bdevs": 4, 00:24:11.746 "num_base_bdevs_discovered": 4, 00:24:11.746 "num_base_bdevs_operational": 4, 00:24:11.746 "base_bdevs_list": [ 00:24:11.746 { 00:24:11.746 "name": "spare", 00:24:11.746 "uuid": "1f2b38f4-5625-5d04-8086-57ece210b825", 00:24:11.746 "is_configured": true, 00:24:11.746 "data_offset": 2048, 00:24:11.746 "data_size": 63488 00:24:11.746 }, 00:24:11.746 { 00:24:11.746 "name": "BaseBdev2", 00:24:11.746 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:11.746 "is_configured": true, 00:24:11.746 "data_offset": 2048, 00:24:11.746 "data_size": 63488 00:24:11.746 }, 00:24:11.746 { 00:24:11.746 "name": 
"BaseBdev3", 00:24:11.746 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:11.746 "is_configured": true, 00:24:11.746 "data_offset": 2048, 00:24:11.746 "data_size": 63488 00:24:11.746 }, 00:24:11.746 { 00:24:11.746 "name": "BaseBdev4", 00:24:11.746 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:11.746 "is_configured": true, 00:24:11.746 "data_offset": 2048, 00:24:11.746 "data_size": 63488 00:24:11.746 } 00:24:11.746 ] 00:24:11.746 }' 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:11.746 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.006 [2024-11-20 05:34:43.616433] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:12.006 [2024-11-20 05:34:43.616463] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:12.006 [2024-11-20 05:34:43.616540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:12.006 [2024-11-20 05:34:43.616625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:12.006 [2024-11-20 05:34:43.616636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.006 
05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:12.006 /dev/nbd0 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:12.006 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:12.268 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:24:12.268 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:12.268 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:12.268 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:12.268 1+0 records in 00:24:12.268 1+0 records out 00:24:12.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196105 s, 20.9 MB/s 00:24:12.268 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.268 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:24:12.268 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.268 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:12.268 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:24:12.268 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:12.268 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i 
< 2 )) 00:24:12.268 05:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:12.268 /dev/nbd1 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:12.268 1+0 records in 00:24:12.268 1+0 records out 00:24:12.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691949 s, 5.9 MB/s 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:12.268 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:12.530 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:12.530 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:12.530 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:12.530 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:12.530 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:12.530 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:12.530 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:12.791 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:12.791 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:12.791 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:12.791 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:12.791 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:12.791 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:24:12.791 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:12.791 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:12.791 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:12.791 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:13.053 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:13.053 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:13.053 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:13.053 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:13.053 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:13.053 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:13.053 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:13.053 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:13.054 05:34:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.054 [2024-11-20 05:34:44.648986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:13.054 [2024-11-20 05:34:44.649036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.054 [2024-11-20 05:34:44.649054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:13.054 [2024-11-20 05:34:44.649061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.054 [2024-11-20 05:34:44.650842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.054 [2024-11-20 05:34:44.650873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:13.054 [2024-11-20 05:34:44.650943] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:13.054 [2024-11-20 05:34:44.650982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:13.054 [2024-11-20 05:34:44.651085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:13.054 [2024-11-20 05:34:44.651161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:13.054 [2024-11-20 05:34:44.651225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:13.054 spare 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.054 
[2024-11-20 05:34:44.751304] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:13.054 [2024-11-20 05:34:44.751355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:13.054 [2024-11-20 05:34:44.751630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:24:13.054 [2024-11-20 05:34:44.755278] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:13.054 [2024-11-20 05:34:44.755302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:13.054 [2024-11-20 05:34:44.755479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:13.054 "name": "raid_bdev1", 00:24:13.054 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:13.054 "strip_size_kb": 64, 00:24:13.054 "state": "online", 00:24:13.054 "raid_level": "raid5f", 00:24:13.054 "superblock": true, 00:24:13.054 "num_base_bdevs": 4, 00:24:13.054 "num_base_bdevs_discovered": 4, 00:24:13.054 "num_base_bdevs_operational": 4, 00:24:13.054 "base_bdevs_list": [ 00:24:13.054 { 00:24:13.054 "name": "spare", 00:24:13.054 "uuid": "1f2b38f4-5625-5d04-8086-57ece210b825", 00:24:13.054 "is_configured": true, 00:24:13.054 "data_offset": 2048, 00:24:13.054 "data_size": 63488 00:24:13.054 }, 00:24:13.054 { 00:24:13.054 "name": "BaseBdev2", 00:24:13.054 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:13.054 "is_configured": true, 00:24:13.054 "data_offset": 2048, 00:24:13.054 "data_size": 63488 00:24:13.054 }, 00:24:13.054 { 00:24:13.054 "name": "BaseBdev3", 00:24:13.054 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:13.054 "is_configured": true, 00:24:13.054 "data_offset": 2048, 00:24:13.054 "data_size": 63488 00:24:13.054 }, 00:24:13.054 { 00:24:13.054 "name": "BaseBdev4", 00:24:13.054 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:13.054 "is_configured": true, 00:24:13.054 "data_offset": 2048, 00:24:13.054 "data_size": 63488 00:24:13.054 } 00:24:13.054 ] 00:24:13.054 }' 
00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:13.054 05:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.316 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:13.316 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:13.316 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:13.316 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:13.316 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:13.316 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.316 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.316 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.316 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.316 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.316 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:13.316 "name": "raid_bdev1", 00:24:13.316 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:13.316 "strip_size_kb": 64, 00:24:13.316 "state": "online", 00:24:13.316 "raid_level": "raid5f", 00:24:13.316 "superblock": true, 00:24:13.316 "num_base_bdevs": 4, 00:24:13.316 "num_base_bdevs_discovered": 4, 00:24:13.316 "num_base_bdevs_operational": 4, 00:24:13.316 "base_bdevs_list": [ 00:24:13.316 { 00:24:13.316 "name": "spare", 00:24:13.316 "uuid": "1f2b38f4-5625-5d04-8086-57ece210b825", 00:24:13.316 "is_configured": true, 00:24:13.316 "data_offset": 2048, 
00:24:13.316 "data_size": 63488 00:24:13.316 }, 00:24:13.316 { 00:24:13.316 "name": "BaseBdev2", 00:24:13.316 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:13.316 "is_configured": true, 00:24:13.316 "data_offset": 2048, 00:24:13.316 "data_size": 63488 00:24:13.316 }, 00:24:13.316 { 00:24:13.316 "name": "BaseBdev3", 00:24:13.316 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:13.316 "is_configured": true, 00:24:13.316 "data_offset": 2048, 00:24:13.316 "data_size": 63488 00:24:13.316 }, 00:24:13.316 { 00:24:13.316 "name": "BaseBdev4", 00:24:13.316 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:13.316 "is_configured": true, 00:24:13.316 "data_offset": 2048, 00:24:13.316 "data_size": 63488 00:24:13.316 } 00:24:13.316 ] 00:24:13.316 }' 00:24:13.316 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:13.316 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:13.316 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:13.578 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:13.578 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.578 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.578 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev 
spare 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.579 [2024-11-20 05:34:45.195914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:13.579 "name": "raid_bdev1", 00:24:13.579 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:13.579 "strip_size_kb": 64, 00:24:13.579 "state": "online", 00:24:13.579 "raid_level": "raid5f", 00:24:13.579 "superblock": true, 00:24:13.579 "num_base_bdevs": 4, 00:24:13.579 "num_base_bdevs_discovered": 3, 00:24:13.579 "num_base_bdevs_operational": 3, 00:24:13.579 "base_bdevs_list": [ 00:24:13.579 { 00:24:13.579 "name": null, 00:24:13.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.579 "is_configured": false, 00:24:13.579 "data_offset": 0, 00:24:13.579 "data_size": 63488 00:24:13.579 }, 00:24:13.579 { 00:24:13.579 "name": "BaseBdev2", 00:24:13.579 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:13.579 "is_configured": true, 00:24:13.579 "data_offset": 2048, 00:24:13.579 "data_size": 63488 00:24:13.579 }, 00:24:13.579 { 00:24:13.579 "name": "BaseBdev3", 00:24:13.579 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:13.579 "is_configured": true, 00:24:13.579 "data_offset": 2048, 00:24:13.579 "data_size": 63488 00:24:13.579 }, 00:24:13.579 { 00:24:13.579 "name": "BaseBdev4", 00:24:13.579 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:13.579 "is_configured": true, 00:24:13.579 "data_offset": 2048, 00:24:13.579 "data_size": 63488 00:24:13.579 } 00:24:13.579 ] 00:24:13.579 }' 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:13.579 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.840 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:13.840 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.840 05:34:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.840 [2024-11-20 05:34:45.519976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:13.840 [2024-11-20 05:34:45.520121] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:13.840 [2024-11-20 05:34:45.520136] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:13.840 [2024-11-20 05:34:45.520166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:13.840 [2024-11-20 05:34:45.527696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:24:13.840 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.840 05:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:13.840 [2024-11-20 05:34:45.532954] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:14.784 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:14.784 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:14.784 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:14.784 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:14.784 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:14.784 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.784 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.784 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.784 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.784 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.784 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:14.784 "name": "raid_bdev1", 00:24:14.784 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:14.784 "strip_size_kb": 64, 00:24:14.784 "state": "online", 00:24:14.784 "raid_level": "raid5f", 00:24:14.784 "superblock": true, 00:24:14.784 "num_base_bdevs": 4, 00:24:14.784 "num_base_bdevs_discovered": 4, 00:24:14.784 "num_base_bdevs_operational": 4, 00:24:14.784 "process": { 00:24:14.784 "type": "rebuild", 00:24:14.784 "target": "spare", 00:24:14.784 "progress": { 00:24:14.784 "blocks": 19200, 00:24:14.784 "percent": 10 00:24:14.784 } 00:24:14.784 }, 00:24:14.784 "base_bdevs_list": [ 00:24:14.784 { 00:24:14.784 "name": "spare", 00:24:14.784 "uuid": "1f2b38f4-5625-5d04-8086-57ece210b825", 00:24:14.784 "is_configured": true, 00:24:14.784 "data_offset": 2048, 00:24:14.784 "data_size": 63488 00:24:14.784 }, 00:24:14.784 { 00:24:14.784 "name": "BaseBdev2", 00:24:14.784 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:14.784 "is_configured": true, 00:24:14.784 "data_offset": 2048, 00:24:14.784 "data_size": 63488 00:24:14.784 }, 00:24:14.784 { 00:24:14.784 "name": "BaseBdev3", 00:24:14.784 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:14.784 "is_configured": true, 00:24:14.784 "data_offset": 2048, 00:24:14.784 "data_size": 63488 00:24:14.784 }, 00:24:14.784 { 00:24:14.784 "name": "BaseBdev4", 00:24:14.784 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:14.784 "is_configured": true, 00:24:14.784 "data_offset": 2048, 00:24:14.784 "data_size": 63488 00:24:14.784 } 00:24:14.784 ] 00:24:14.784 }' 00:24:14.784 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:24:14.784 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:14.784 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.043 [2024-11-20 05:34:46.633835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:15.043 [2024-11-20 05:34:46.640289] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:15.043 [2024-11-20 05:34:46.640338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:15.043 [2024-11-20 05:34:46.640352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:15.043 [2024-11-20 05:34:46.640359] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.043 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:15.043 "name": "raid_bdev1", 00:24:15.043 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:15.043 "strip_size_kb": 64, 00:24:15.043 "state": "online", 00:24:15.043 "raid_level": "raid5f", 00:24:15.043 "superblock": true, 00:24:15.043 "num_base_bdevs": 4, 00:24:15.043 "num_base_bdevs_discovered": 3, 00:24:15.043 "num_base_bdevs_operational": 3, 00:24:15.044 "base_bdevs_list": [ 00:24:15.044 { 00:24:15.044 "name": null, 00:24:15.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.044 "is_configured": false, 00:24:15.044 "data_offset": 0, 00:24:15.044 "data_size": 63488 00:24:15.044 }, 00:24:15.044 { 00:24:15.044 "name": "BaseBdev2", 00:24:15.044 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:15.044 "is_configured": true, 00:24:15.044 
"data_offset": 2048, 00:24:15.044 "data_size": 63488 00:24:15.044 }, 00:24:15.044 { 00:24:15.044 "name": "BaseBdev3", 00:24:15.044 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:15.044 "is_configured": true, 00:24:15.044 "data_offset": 2048, 00:24:15.044 "data_size": 63488 00:24:15.044 }, 00:24:15.044 { 00:24:15.044 "name": "BaseBdev4", 00:24:15.044 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:15.044 "is_configured": true, 00:24:15.044 "data_offset": 2048, 00:24:15.044 "data_size": 63488 00:24:15.044 } 00:24:15.044 ] 00:24:15.044 }' 00:24:15.044 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:15.044 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.302 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:15.302 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.302 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.302 [2024-11-20 05:34:46.980648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:15.302 [2024-11-20 05:34:46.980699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.302 [2024-11-20 05:34:46.980720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:24:15.302 [2024-11-20 05:34:46.980730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.302 [2024-11-20 05:34:46.981111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.302 [2024-11-20 05:34:46.981138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:15.302 [2024-11-20 05:34:46.981209] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:15.302 [2024-11-20 05:34:46.981220] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:15.302 [2024-11-20 05:34:46.981229] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:15.302 [2024-11-20 05:34:46.981248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:15.302 [2024-11-20 05:34:46.988696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:24:15.302 spare 00:24:15.302 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.302 05:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:15.302 [2024-11-20 05:34:46.993846] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:16.238 05:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:16.238 05:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:16.238 05:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:16.238 05:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:16.238 05:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:16.238 05:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.238 05:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.238 05:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.238 05:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.238 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:24:16.238 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:16.238 "name": "raid_bdev1", 00:24:16.238 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:16.238 "strip_size_kb": 64, 00:24:16.238 "state": "online", 00:24:16.238 "raid_level": "raid5f", 00:24:16.238 "superblock": true, 00:24:16.238 "num_base_bdevs": 4, 00:24:16.238 "num_base_bdevs_discovered": 4, 00:24:16.238 "num_base_bdevs_operational": 4, 00:24:16.238 "process": { 00:24:16.238 "type": "rebuild", 00:24:16.238 "target": "spare", 00:24:16.238 "progress": { 00:24:16.238 "blocks": 19200, 00:24:16.238 "percent": 10 00:24:16.238 } 00:24:16.238 }, 00:24:16.238 "base_bdevs_list": [ 00:24:16.238 { 00:24:16.238 "name": "spare", 00:24:16.238 "uuid": "1f2b38f4-5625-5d04-8086-57ece210b825", 00:24:16.238 "is_configured": true, 00:24:16.238 "data_offset": 2048, 00:24:16.238 "data_size": 63488 00:24:16.238 }, 00:24:16.238 { 00:24:16.238 "name": "BaseBdev2", 00:24:16.238 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:16.238 "is_configured": true, 00:24:16.238 "data_offset": 2048, 00:24:16.238 "data_size": 63488 00:24:16.238 }, 00:24:16.238 { 00:24:16.238 "name": "BaseBdev3", 00:24:16.238 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:16.238 "is_configured": true, 00:24:16.238 "data_offset": 2048, 00:24:16.238 "data_size": 63488 00:24:16.238 }, 00:24:16.238 { 00:24:16.238 "name": "BaseBdev4", 00:24:16.238 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:16.238 "is_configured": true, 00:24:16.238 "data_offset": 2048, 00:24:16.238 "data_size": 63488 00:24:16.238 } 00:24:16.238 ] 00:24:16.238 }' 00:24:16.238 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.495 [2024-11-20 05:34:48.102717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:16.495 [2024-11-20 05:34:48.201985] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:16.495 [2024-11-20 05:34:48.202047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.495 [2024-11-20 05:34:48.202062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:16.495 [2024-11-20 05:34:48.202068] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:16.495 "name": "raid_bdev1", 00:24:16.495 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:16.495 "strip_size_kb": 64, 00:24:16.495 "state": "online", 00:24:16.495 "raid_level": "raid5f", 00:24:16.495 "superblock": true, 00:24:16.495 "num_base_bdevs": 4, 00:24:16.495 "num_base_bdevs_discovered": 3, 00:24:16.495 "num_base_bdevs_operational": 3, 00:24:16.495 "base_bdevs_list": [ 00:24:16.495 { 00:24:16.495 "name": null, 00:24:16.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.495 "is_configured": false, 00:24:16.495 "data_offset": 0, 00:24:16.495 "data_size": 63488 00:24:16.495 }, 00:24:16.495 { 00:24:16.495 "name": "BaseBdev2", 00:24:16.495 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:16.495 "is_configured": true, 00:24:16.495 "data_offset": 2048, 00:24:16.495 "data_size": 63488 00:24:16.495 }, 00:24:16.495 { 00:24:16.495 "name": "BaseBdev3", 00:24:16.495 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:16.495 "is_configured": true, 00:24:16.495 "data_offset": 2048, 
00:24:16.495 "data_size": 63488 00:24:16.495 }, 00:24:16.495 { 00:24:16.495 "name": "BaseBdev4", 00:24:16.495 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:16.495 "is_configured": true, 00:24:16.495 "data_offset": 2048, 00:24:16.495 "data_size": 63488 00:24:16.495 } 00:24:16.495 ] 00:24:16.495 }' 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:16.495 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.755 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:16.755 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:16.755 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:16.755 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:16.755 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:16.755 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.755 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.755 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.755 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.755 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.755 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:16.755 "name": "raid_bdev1", 00:24:16.755 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:16.755 "strip_size_kb": 64, 00:24:16.755 "state": "online", 00:24:16.755 "raid_level": "raid5f", 00:24:16.755 "superblock": true, 00:24:16.755 "num_base_bdevs": 4, 
00:24:16.755 "num_base_bdevs_discovered": 3, 00:24:16.755 "num_base_bdevs_operational": 3, 00:24:16.755 "base_bdevs_list": [ 00:24:16.755 { 00:24:16.755 "name": null, 00:24:16.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.755 "is_configured": false, 00:24:16.755 "data_offset": 0, 00:24:16.756 "data_size": 63488 00:24:16.756 }, 00:24:16.756 { 00:24:16.756 "name": "BaseBdev2", 00:24:16.756 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:16.756 "is_configured": true, 00:24:16.756 "data_offset": 2048, 00:24:16.756 "data_size": 63488 00:24:16.756 }, 00:24:16.756 { 00:24:16.756 "name": "BaseBdev3", 00:24:16.756 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:16.756 "is_configured": true, 00:24:16.756 "data_offset": 2048, 00:24:16.756 "data_size": 63488 00:24:16.756 }, 00:24:16.756 { 00:24:16.756 "name": "BaseBdev4", 00:24:16.756 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:16.756 "is_configured": true, 00:24:16.756 "data_offset": 2048, 00:24:16.756 "data_size": 63488 00:24:16.756 } 00:24:16.756 ] 00:24:16.756 }' 00:24:16.756 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:17.014 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:17.014 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:17.014 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:17.014 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:17.014 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.014 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.014 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.014 05:34:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:17.014 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.014 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.014 [2024-11-20 05:34:48.638317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:17.014 [2024-11-20 05:34:48.638360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:17.014 [2024-11-20 05:34:48.638386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:24:17.014 [2024-11-20 05:34:48.638394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:17.014 [2024-11-20 05:34:48.638746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:17.014 [2024-11-20 05:34:48.638769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:17.014 [2024-11-20 05:34:48.638827] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:17.014 [2024-11-20 05:34:48.638838] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:17.014 [2024-11-20 05:34:48.638846] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:17.014 [2024-11-20 05:34:48.638853] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:17.014 BaseBdev1 00:24:17.014 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.014 05:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:17.946 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:17.946 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:17.946 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:17.946 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:17.946 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:17.946 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:17.946 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:17.946 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:17.946 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:17.946 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:17.946 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:17.946 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.946 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.946 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.947 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.947 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:17.947 "name": "raid_bdev1", 00:24:17.947 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:17.947 "strip_size_kb": 64, 00:24:17.947 "state": "online", 00:24:17.947 "raid_level": "raid5f", 00:24:17.947 "superblock": true, 00:24:17.947 "num_base_bdevs": 4, 00:24:17.947 
"num_base_bdevs_discovered": 3, 00:24:17.947 "num_base_bdevs_operational": 3, 00:24:17.947 "base_bdevs_list": [ 00:24:17.947 { 00:24:17.947 "name": null, 00:24:17.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.947 "is_configured": false, 00:24:17.947 "data_offset": 0, 00:24:17.947 "data_size": 63488 00:24:17.947 }, 00:24:17.947 { 00:24:17.947 "name": "BaseBdev2", 00:24:17.947 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:17.947 "is_configured": true, 00:24:17.947 "data_offset": 2048, 00:24:17.947 "data_size": 63488 00:24:17.947 }, 00:24:17.947 { 00:24:17.947 "name": "BaseBdev3", 00:24:17.947 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:17.947 "is_configured": true, 00:24:17.947 "data_offset": 2048, 00:24:17.947 "data_size": 63488 00:24:17.947 }, 00:24:17.947 { 00:24:17.947 "name": "BaseBdev4", 00:24:17.947 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:17.947 "is_configured": true, 00:24:17.947 "data_offset": 2048, 00:24:17.947 "data_size": 63488 00:24:17.947 } 00:24:17.947 ] 00:24:17.947 }' 00:24:17.947 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:17.947 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.206 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:18.206 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:18.206 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:18.206 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:18.206 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:18.206 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.206 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.206 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.206 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.206 05:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.206 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:18.206 "name": "raid_bdev1", 00:24:18.206 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:18.206 "strip_size_kb": 64, 00:24:18.206 "state": "online", 00:24:18.206 "raid_level": "raid5f", 00:24:18.206 "superblock": true, 00:24:18.206 "num_base_bdevs": 4, 00:24:18.206 "num_base_bdevs_discovered": 3, 00:24:18.206 "num_base_bdevs_operational": 3, 00:24:18.206 "base_bdevs_list": [ 00:24:18.206 { 00:24:18.206 "name": null, 00:24:18.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.206 "is_configured": false, 00:24:18.206 "data_offset": 0, 00:24:18.206 "data_size": 63488 00:24:18.206 }, 00:24:18.206 { 00:24:18.206 "name": "BaseBdev2", 00:24:18.206 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:18.206 "is_configured": true, 00:24:18.206 "data_offset": 2048, 00:24:18.206 "data_size": 63488 00:24:18.206 }, 00:24:18.206 { 00:24:18.206 "name": "BaseBdev3", 00:24:18.206 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:18.206 "is_configured": true, 00:24:18.206 "data_offset": 2048, 00:24:18.206 "data_size": 63488 00:24:18.206 }, 00:24:18.206 { 00:24:18.206 "name": "BaseBdev4", 00:24:18.206 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:18.206 "is_configured": true, 00:24:18.206 "data_offset": 2048, 00:24:18.206 "data_size": 63488 00:24:18.206 } 00:24:18.206 ] 00:24:18.206 }' 00:24:18.206 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:18.206 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- 
# [[ none == \n\o\n\e ]] 00:24:18.206 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.463 [2024-11-20 05:34:50.074623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:18.463 [2024-11-20 05:34:50.074745] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:18.463 [2024-11-20 05:34:50.074763] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:18.463 request: 00:24:18.463 { 00:24:18.463 "base_bdev": 
"BaseBdev1", 00:24:18.463 "raid_bdev": "raid_bdev1", 00:24:18.463 "method": "bdev_raid_add_base_bdev", 00:24:18.463 "req_id": 1 00:24:18.463 } 00:24:18.463 Got JSON-RPC error response 00:24:18.463 response: 00:24:18.463 { 00:24:18.463 "code": -22, 00:24:18.463 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:18.463 } 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:18.463 05:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:19.458 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:19.458 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:19.458 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:19.458 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:19.458 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:19.458 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:19.458 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:19.458 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:19.458 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:24:19.458 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:19.458 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.458 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.458 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.458 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.458 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.458 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:19.458 "name": "raid_bdev1", 00:24:19.459 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:19.459 "strip_size_kb": 64, 00:24:19.459 "state": "online", 00:24:19.459 "raid_level": "raid5f", 00:24:19.459 "superblock": true, 00:24:19.459 "num_base_bdevs": 4, 00:24:19.459 "num_base_bdevs_discovered": 3, 00:24:19.459 "num_base_bdevs_operational": 3, 00:24:19.459 "base_bdevs_list": [ 00:24:19.459 { 00:24:19.459 "name": null, 00:24:19.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.459 "is_configured": false, 00:24:19.459 "data_offset": 0, 00:24:19.459 "data_size": 63488 00:24:19.459 }, 00:24:19.459 { 00:24:19.459 "name": "BaseBdev2", 00:24:19.459 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:19.459 "is_configured": true, 00:24:19.459 "data_offset": 2048, 00:24:19.459 "data_size": 63488 00:24:19.459 }, 00:24:19.459 { 00:24:19.459 "name": "BaseBdev3", 00:24:19.459 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:19.459 "is_configured": true, 00:24:19.459 "data_offset": 2048, 00:24:19.459 "data_size": 63488 00:24:19.459 }, 00:24:19.459 { 00:24:19.459 "name": "BaseBdev4", 00:24:19.459 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:19.459 "is_configured": true, 00:24:19.459 "data_offset": 
2048, 00:24:19.459 "data_size": 63488 00:24:19.459 } 00:24:19.459 ] 00:24:19.459 }' 00:24:19.459 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:19.459 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:19.717 "name": "raid_bdev1", 00:24:19.717 "uuid": "123b5e86-30a8-4e36-8d93-df4456a9be4c", 00:24:19.717 "strip_size_kb": 64, 00:24:19.717 "state": "online", 00:24:19.717 "raid_level": "raid5f", 00:24:19.717 "superblock": true, 00:24:19.717 "num_base_bdevs": 4, 00:24:19.717 "num_base_bdevs_discovered": 3, 00:24:19.717 "num_base_bdevs_operational": 3, 00:24:19.717 "base_bdevs_list": [ 00:24:19.717 { 00:24:19.717 "name": null, 00:24:19.717 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:19.717 "is_configured": false, 00:24:19.717 "data_offset": 0, 00:24:19.717 "data_size": 63488 00:24:19.717 }, 00:24:19.717 { 00:24:19.717 "name": "BaseBdev2", 00:24:19.717 "uuid": "4e845b04-d1cd-515f-ba83-0e120217db1c", 00:24:19.717 "is_configured": true, 00:24:19.717 "data_offset": 2048, 00:24:19.717 "data_size": 63488 00:24:19.717 }, 00:24:19.717 { 00:24:19.717 "name": "BaseBdev3", 00:24:19.717 "uuid": "e9a01569-e51a-5dce-a2b6-7b865ec3f6cb", 00:24:19.717 "is_configured": true, 00:24:19.717 "data_offset": 2048, 00:24:19.717 "data_size": 63488 00:24:19.717 }, 00:24:19.717 { 00:24:19.717 "name": "BaseBdev4", 00:24:19.717 "uuid": "d0c30e76-2b8d-5a1b-976b-a7d9554df561", 00:24:19.717 "is_configured": true, 00:24:19.717 "data_offset": 2048, 00:24:19.717 "data_size": 63488 00:24:19.717 } 00:24:19.717 ] 00:24:19.717 }' 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82733 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82733 ']' 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 82733 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82733 00:24:19.717 
05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:19.717 killing process with pid 82733 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82733' 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 82733 00:24:19.717 Received shutdown signal, test time was about 60.000000 seconds 00:24:19.717 00:24:19.717 Latency(us) 00:24:19.717 [2024-11-20T05:34:51.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.717 [2024-11-20T05:34:51.552Z] =================================================================================================================== 00:24:19.717 [2024-11-20T05:34:51.552Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:19.717 [2024-11-20 05:34:51.521431] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:19.717 05:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 82733 00:24:19.717 [2024-11-20 05:34:51.521522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:19.717 [2024-11-20 05:34:51.521582] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:19.717 [2024-11-20 05:34:51.521596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:19.979 [2024-11-20 05:34:51.760032] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:20.545 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:24:20.545 00:24:20.545 real 0m24.663s 00:24:20.545 user 0m29.749s 00:24:20.545 sys 0m2.428s 00:24:20.545 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- 
# xtrace_disable 00:24:20.545 ************************************ 00:24:20.545 END TEST raid5f_rebuild_test_sb 00:24:20.545 ************************************ 00:24:20.545 05:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.545 05:34:52 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:24:20.545 05:34:52 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:24:20.545 05:34:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:24:20.545 05:34:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:20.545 05:34:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:20.545 ************************************ 00:24:20.545 START TEST raid_state_function_test_sb_4k 00:24:20.545 ************************************ 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=83535 00:24:20.545 Process raid pid: 83535 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83535' 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 83535 00:24:20.545 05:34:52 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 83535 ']' 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:20.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:20.545 05:34:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:20.803 [2024-11-20 05:34:52.424884] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:24:20.803 [2024-11-20 05:34:52.425002] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.803 [2024-11-20 05:34:52.584500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.062 [2024-11-20 05:34:52.686459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.062 [2024-11-20 05:34:52.824572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:21.062 [2024-11-20 05:34:52.824610] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:21.628 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:21.628 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.629 [2024-11-20 05:34:53.278460] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:21.629 [2024-11-20 05:34:53.278515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:21.629 [2024-11-20 05:34:53.278525] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:21.629 [2024-11-20 05:34:53.278536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:21.629 "name": "Existed_Raid", 00:24:21.629 "uuid": 
"b764eb6e-7f48-4742-82b6-f1c1b94fe280", 00:24:21.629 "strip_size_kb": 0, 00:24:21.629 "state": "configuring", 00:24:21.629 "raid_level": "raid1", 00:24:21.629 "superblock": true, 00:24:21.629 "num_base_bdevs": 2, 00:24:21.629 "num_base_bdevs_discovered": 0, 00:24:21.629 "num_base_bdevs_operational": 2, 00:24:21.629 "base_bdevs_list": [ 00:24:21.629 { 00:24:21.629 "name": "BaseBdev1", 00:24:21.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.629 "is_configured": false, 00:24:21.629 "data_offset": 0, 00:24:21.629 "data_size": 0 00:24:21.629 }, 00:24:21.629 { 00:24:21.629 "name": "BaseBdev2", 00:24:21.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.629 "is_configured": false, 00:24:21.629 "data_offset": 0, 00:24:21.629 "data_size": 0 00:24:21.629 } 00:24:21.629 ] 00:24:21.629 }' 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:21.629 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.888 [2024-11-20 05:34:53.594499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:21.888 [2024-11-20 05:34:53.594546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:21.888 05:34:53 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.888 [2024-11-20 05:34:53.602488] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:21.888 [2024-11-20 05:34:53.602536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:21.888 [2024-11-20 05:34:53.602545] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:21.888 [2024-11-20 05:34:53.602557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.888 [2024-11-20 05:34:53.635247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:21.888 BaseBdev1 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.888 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.888 [ 00:24:21.888 { 00:24:21.888 "name": "BaseBdev1", 00:24:21.888 "aliases": [ 00:24:21.888 "1bd3528f-ed05-46ad-a7b1-1f471d7542dc" 00:24:21.888 ], 00:24:21.888 "product_name": "Malloc disk", 00:24:21.888 "block_size": 4096, 00:24:21.888 "num_blocks": 8192, 00:24:21.888 "uuid": "1bd3528f-ed05-46ad-a7b1-1f471d7542dc", 00:24:21.888 "assigned_rate_limits": { 00:24:21.888 "rw_ios_per_sec": 0, 00:24:21.888 "rw_mbytes_per_sec": 0, 00:24:21.888 "r_mbytes_per_sec": 0, 00:24:21.888 "w_mbytes_per_sec": 0 00:24:21.888 }, 00:24:21.888 "claimed": true, 00:24:21.888 "claim_type": "exclusive_write", 00:24:21.888 "zoned": false, 00:24:21.888 "supported_io_types": { 00:24:21.888 "read": true, 00:24:21.888 "write": true, 00:24:21.888 "unmap": true, 00:24:21.888 "flush": true, 00:24:21.888 "reset": true, 00:24:21.888 "nvme_admin": false, 00:24:21.889 "nvme_io": false, 00:24:21.889 "nvme_io_md": false, 00:24:21.889 "write_zeroes": true, 00:24:21.889 "zcopy": true, 00:24:21.889 
"get_zone_info": false, 00:24:21.889 "zone_management": false, 00:24:21.889 "zone_append": false, 00:24:21.889 "compare": false, 00:24:21.889 "compare_and_write": false, 00:24:21.889 "abort": true, 00:24:21.889 "seek_hole": false, 00:24:21.889 "seek_data": false, 00:24:21.889 "copy": true, 00:24:21.889 "nvme_iov_md": false 00:24:21.889 }, 00:24:21.889 "memory_domains": [ 00:24:21.889 { 00:24:21.889 "dma_device_id": "system", 00:24:21.889 "dma_device_type": 1 00:24:21.889 }, 00:24:21.889 { 00:24:21.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:21.889 "dma_device_type": 2 00:24:21.889 } 00:24:21.889 ], 00:24:21.889 "driver_specific": {} 00:24:21.889 } 00:24:21.889 ] 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:21.889 "name": "Existed_Raid", 00:24:21.889 "uuid": "e4ec33fb-eaeb-4b7c-9d2c-6b5065ddade7", 00:24:21.889 "strip_size_kb": 0, 00:24:21.889 "state": "configuring", 00:24:21.889 "raid_level": "raid1", 00:24:21.889 "superblock": true, 00:24:21.889 "num_base_bdevs": 2, 00:24:21.889 "num_base_bdevs_discovered": 1, 00:24:21.889 "num_base_bdevs_operational": 2, 00:24:21.889 "base_bdevs_list": [ 00:24:21.889 { 00:24:21.889 "name": "BaseBdev1", 00:24:21.889 "uuid": "1bd3528f-ed05-46ad-a7b1-1f471d7542dc", 00:24:21.889 "is_configured": true, 00:24:21.889 "data_offset": 256, 00:24:21.889 "data_size": 7936 00:24:21.889 }, 00:24:21.889 { 00:24:21.889 "name": "BaseBdev2", 00:24:21.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.889 "is_configured": false, 00:24:21.889 "data_offset": 0, 00:24:21.889 "data_size": 0 00:24:21.889 } 00:24:21.889 ] 00:24:21.889 }' 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:21.889 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.148 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:22.148 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.148 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.148 [2024-11-20 05:34:53.971376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:22.148 [2024-11-20 05:34:53.971428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:22.148 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.148 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:22.148 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.148 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.148 [2024-11-20 05:34:53.979413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:22.406 [2024-11-20 05:34:53.981262] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:22.406 [2024-11-20 05:34:53.981304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:22.406 05:34:53 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.406 05:34:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.406 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:22.406 "name": "Existed_Raid", 00:24:22.406 "uuid": "314dc701-8658-4f3a-b43b-70613eef63ad", 00:24:22.406 "strip_size_kb": 0, 00:24:22.406 "state": "configuring", 00:24:22.406 "raid_level": "raid1", 00:24:22.406 "superblock": true, 
00:24:22.406 "num_base_bdevs": 2, 00:24:22.406 "num_base_bdevs_discovered": 1, 00:24:22.406 "num_base_bdevs_operational": 2, 00:24:22.406 "base_bdevs_list": [ 00:24:22.406 { 00:24:22.406 "name": "BaseBdev1", 00:24:22.406 "uuid": "1bd3528f-ed05-46ad-a7b1-1f471d7542dc", 00:24:22.406 "is_configured": true, 00:24:22.406 "data_offset": 256, 00:24:22.406 "data_size": 7936 00:24:22.406 }, 00:24:22.406 { 00:24:22.406 "name": "BaseBdev2", 00:24:22.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.406 "is_configured": false, 00:24:22.406 "data_offset": 0, 00:24:22.406 "data_size": 0 00:24:22.406 } 00:24:22.406 ] 00:24:22.406 }' 00:24:22.406 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:22.406 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.665 [2024-11-20 05:34:54.318325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:22.665 [2024-11-20 05:34:54.318562] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:22.665 [2024-11-20 05:34:54.318576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:22.665 [2024-11-20 05:34:54.318828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:22.665 BaseBdev2 00:24:22.665 [2024-11-20 05:34:54.318968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:22.665 [2024-11-20 05:34:54.318979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:24:22.665 [2024-11-20 05:34:54.319107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.665 [ 00:24:22.665 { 00:24:22.665 "name": "BaseBdev2", 00:24:22.665 "aliases": [ 00:24:22.665 "4a1652ec-8bee-4f01-a341-c09d411cc29c" 00:24:22.665 ], 00:24:22.665 "product_name": "Malloc 
disk", 00:24:22.665 "block_size": 4096, 00:24:22.665 "num_blocks": 8192, 00:24:22.665 "uuid": "4a1652ec-8bee-4f01-a341-c09d411cc29c", 00:24:22.665 "assigned_rate_limits": { 00:24:22.665 "rw_ios_per_sec": 0, 00:24:22.665 "rw_mbytes_per_sec": 0, 00:24:22.665 "r_mbytes_per_sec": 0, 00:24:22.665 "w_mbytes_per_sec": 0 00:24:22.665 }, 00:24:22.665 "claimed": true, 00:24:22.665 "claim_type": "exclusive_write", 00:24:22.665 "zoned": false, 00:24:22.665 "supported_io_types": { 00:24:22.665 "read": true, 00:24:22.665 "write": true, 00:24:22.665 "unmap": true, 00:24:22.665 "flush": true, 00:24:22.665 "reset": true, 00:24:22.665 "nvme_admin": false, 00:24:22.665 "nvme_io": false, 00:24:22.665 "nvme_io_md": false, 00:24:22.665 "write_zeroes": true, 00:24:22.665 "zcopy": true, 00:24:22.665 "get_zone_info": false, 00:24:22.665 "zone_management": false, 00:24:22.665 "zone_append": false, 00:24:22.665 "compare": false, 00:24:22.665 "compare_and_write": false, 00:24:22.665 "abort": true, 00:24:22.665 "seek_hole": false, 00:24:22.665 "seek_data": false, 00:24:22.665 "copy": true, 00:24:22.665 "nvme_iov_md": false 00:24:22.665 }, 00:24:22.665 "memory_domains": [ 00:24:22.665 { 00:24:22.665 "dma_device_id": "system", 00:24:22.665 "dma_device_type": 1 00:24:22.665 }, 00:24:22.665 { 00:24:22.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.665 "dma_device_type": 2 00:24:22.665 } 00:24:22.665 ], 00:24:22.665 "driver_specific": {} 00:24:22.665 } 00:24:22.665 ] 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:22.665 "name": "Existed_Raid", 00:24:22.665 "uuid": "314dc701-8658-4f3a-b43b-70613eef63ad", 00:24:22.665 "strip_size_kb": 0, 00:24:22.665 "state": "online", 
00:24:22.665 "raid_level": "raid1", 00:24:22.665 "superblock": true, 00:24:22.665 "num_base_bdevs": 2, 00:24:22.665 "num_base_bdevs_discovered": 2, 00:24:22.665 "num_base_bdevs_operational": 2, 00:24:22.665 "base_bdevs_list": [ 00:24:22.665 { 00:24:22.665 "name": "BaseBdev1", 00:24:22.665 "uuid": "1bd3528f-ed05-46ad-a7b1-1f471d7542dc", 00:24:22.665 "is_configured": true, 00:24:22.665 "data_offset": 256, 00:24:22.665 "data_size": 7936 00:24:22.665 }, 00:24:22.665 { 00:24:22.665 "name": "BaseBdev2", 00:24:22.665 "uuid": "4a1652ec-8bee-4f01-a341-c09d411cc29c", 00:24:22.665 "is_configured": true, 00:24:22.665 "data_offset": 256, 00:24:22.665 "data_size": 7936 00:24:22.665 } 00:24:22.665 ] 00:24:22.665 }' 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:22.665 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.924 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:22.924 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:22.924 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:22.924 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:22.924 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:24:22.924 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:22.924 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:22.924 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.924 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:24:22.924 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:22.924 [2024-11-20 05:34:54.666697] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:22.924 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.924 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:22.924 "name": "Existed_Raid", 00:24:22.924 "aliases": [ 00:24:22.924 "314dc701-8658-4f3a-b43b-70613eef63ad" 00:24:22.924 ], 00:24:22.924 "product_name": "Raid Volume", 00:24:22.924 "block_size": 4096, 00:24:22.924 "num_blocks": 7936, 00:24:22.924 "uuid": "314dc701-8658-4f3a-b43b-70613eef63ad", 00:24:22.924 "assigned_rate_limits": { 00:24:22.924 "rw_ios_per_sec": 0, 00:24:22.924 "rw_mbytes_per_sec": 0, 00:24:22.924 "r_mbytes_per_sec": 0, 00:24:22.924 "w_mbytes_per_sec": 0 00:24:22.924 }, 00:24:22.924 "claimed": false, 00:24:22.924 "zoned": false, 00:24:22.925 "supported_io_types": { 00:24:22.925 "read": true, 00:24:22.925 "write": true, 00:24:22.925 "unmap": false, 00:24:22.925 "flush": false, 00:24:22.925 "reset": true, 00:24:22.925 "nvme_admin": false, 00:24:22.925 "nvme_io": false, 00:24:22.925 "nvme_io_md": false, 00:24:22.925 "write_zeroes": true, 00:24:22.925 "zcopy": false, 00:24:22.925 "get_zone_info": false, 00:24:22.925 "zone_management": false, 00:24:22.925 "zone_append": false, 00:24:22.925 "compare": false, 00:24:22.925 "compare_and_write": false, 00:24:22.925 "abort": false, 00:24:22.925 "seek_hole": false, 00:24:22.925 "seek_data": false, 00:24:22.925 "copy": false, 00:24:22.925 "nvme_iov_md": false 00:24:22.925 }, 00:24:22.925 "memory_domains": [ 00:24:22.925 { 00:24:22.925 "dma_device_id": "system", 00:24:22.925 "dma_device_type": 1 00:24:22.925 }, 00:24:22.925 { 00:24:22.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.925 "dma_device_type": 2 00:24:22.925 }, 00:24:22.925 { 00:24:22.925 
"dma_device_id": "system", 00:24:22.925 "dma_device_type": 1 00:24:22.925 }, 00:24:22.925 { 00:24:22.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.925 "dma_device_type": 2 00:24:22.925 } 00:24:22.925 ], 00:24:22.925 "driver_specific": { 00:24:22.925 "raid": { 00:24:22.925 "uuid": "314dc701-8658-4f3a-b43b-70613eef63ad", 00:24:22.925 "strip_size_kb": 0, 00:24:22.925 "state": "online", 00:24:22.925 "raid_level": "raid1", 00:24:22.925 "superblock": true, 00:24:22.925 "num_base_bdevs": 2, 00:24:22.925 "num_base_bdevs_discovered": 2, 00:24:22.925 "num_base_bdevs_operational": 2, 00:24:22.925 "base_bdevs_list": [ 00:24:22.925 { 00:24:22.925 "name": "BaseBdev1", 00:24:22.925 "uuid": "1bd3528f-ed05-46ad-a7b1-1f471d7542dc", 00:24:22.925 "is_configured": true, 00:24:22.925 "data_offset": 256, 00:24:22.925 "data_size": 7936 00:24:22.925 }, 00:24:22.925 { 00:24:22.925 "name": "BaseBdev2", 00:24:22.925 "uuid": "4a1652ec-8bee-4f01-a341-c09d411cc29c", 00:24:22.925 "is_configured": true, 00:24:22.925 "data_offset": 256, 00:24:22.925 "data_size": 7936 00:24:22.925 } 00:24:22.925 ] 00:24:22.925 } 00:24:22.925 } 00:24:22.925 }' 00:24:22.925 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:22.925 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:22.925 BaseBdev2' 00:24:22.925 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:22.925 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:24:22.925 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:22.925 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:24:22.925 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.925 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:22.925 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.183 
05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:23.183 [2024-11-20 05:34:54.822517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:23.183 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:23.184 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:23.184 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:23.184 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:23.184 05:34:54 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:23.184 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.184 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:23.184 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.184 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:23.184 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.184 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:23.184 "name": "Existed_Raid", 00:24:23.184 "uuid": "314dc701-8658-4f3a-b43b-70613eef63ad", 00:24:23.184 "strip_size_kb": 0, 00:24:23.184 "state": "online", 00:24:23.184 "raid_level": "raid1", 00:24:23.184 "superblock": true, 00:24:23.184 "num_base_bdevs": 2, 00:24:23.184 "num_base_bdevs_discovered": 1, 00:24:23.184 "num_base_bdevs_operational": 1, 00:24:23.184 "base_bdevs_list": [ 00:24:23.184 { 00:24:23.184 "name": null, 00:24:23.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.184 "is_configured": false, 00:24:23.184 "data_offset": 0, 00:24:23.184 "data_size": 7936 00:24:23.184 }, 00:24:23.184 { 00:24:23.184 "name": "BaseBdev2", 00:24:23.184 "uuid": "4a1652ec-8bee-4f01-a341-c09d411cc29c", 00:24:23.184 "is_configured": true, 00:24:23.184 "data_offset": 256, 00:24:23.184 "data_size": 7936 00:24:23.184 } 00:24:23.184 ] 00:24:23.184 }' 00:24:23.184 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:23.184 05:34:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:23.442 05:34:55 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:23.442 [2024-11-20 05:34:55.214056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:23.442 [2024-11-20 05:34:55.214148] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:23.442 [2024-11-20 05:34:55.262133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:23.442 [2024-11-20 05:34:55.262183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:23.442 [2024-11-20 05:34:55.262192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:23.442 05:34:55 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:23.442 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.699 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:23.699 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:23.699 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:23.699 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 83535 00:24:23.699 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 83535 ']' 00:24:23.699 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 83535 00:24:23.699 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:24:23.699 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:23.699 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83535 00:24:23.699 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:23.699 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:23.699 killing process with pid 83535 00:24:23.699 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83535' 00:24:23.699 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 83535 00:24:23.699 [2024-11-20 05:34:55.322761] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:23.699 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 83535 00:24:23.699 [2024-11-20 05:34:55.331205] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:24.266 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:24:24.266 00:24:24.266 real 0m3.538s 00:24:24.266 user 0m5.147s 00:24:24.266 sys 0m0.601s 00:24:24.266 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:24.266 05:34:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.266 ************************************ 00:24:24.266 END TEST raid_state_function_test_sb_4k 00:24:24.266 ************************************ 00:24:24.266 05:34:55 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:24:24.266 05:34:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:24:24.266 05:34:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:24.266 05:34:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:24.266 ************************************ 00:24:24.266 START TEST raid_superblock_test_4k 00:24:24.266 ************************************ 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # 
raid_superblock_test raid1 2 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=83765 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 83765 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 83765 ']' 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:24.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:24.266 05:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.266 [2024-11-20 05:34:56.008867] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:24:24.266 [2024-11-20 05:34:56.009011] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83765 ] 00:24:24.524 [2024-11-20 05:34:56.171464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.524 [2024-11-20 05:34:56.269620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.781 [2024-11-20 05:34:56.405056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:24.781 [2024-11-20 05:34:56.405111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:24:25.347 05:34:56 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.347 malloc1 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.347 [2024-11-20 05:34:56.913542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:25.347 [2024-11-20 05:34:56.913600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.347 
[2024-11-20 05:34:56.913621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:25.347 [2024-11-20 05:34:56.913631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.347 [2024-11-20 05:34:56.915756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.347 [2024-11-20 05:34:56.915786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:25.347 pt1 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.347 malloc2 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.347 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.347 [2024-11-20 05:34:56.953527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:25.348 [2024-11-20 05:34:56.953577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.348 [2024-11-20 05:34:56.953598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:25.348 [2024-11-20 05:34:56.953606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.348 [2024-11-20 05:34:56.955697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.348 [2024-11-20 05:34:56.955726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:25.348 pt2 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.348 [2024-11-20 05:34:56.961593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:25.348 [2024-11-20 05:34:56.963493] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:25.348 [2024-11-20 05:34:56.963666] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:25.348 [2024-11-20 05:34:56.963688] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:25.348 [2024-11-20 05:34:56.963943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:25.348 [2024-11-20 05:34:56.964094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:25.348 [2024-11-20 05:34:56.964114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:25.348 [2024-11-20 05:34:56.964259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.348 05:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.348 "name": "raid_bdev1", 00:24:25.348 "uuid": "27ad4ef3-fc85-449e-b458-f1f9b0da26f5", 00:24:25.348 "strip_size_kb": 0, 00:24:25.348 "state": "online", 00:24:25.348 "raid_level": "raid1", 00:24:25.348 "superblock": true, 00:24:25.348 "num_base_bdevs": 2, 00:24:25.348 "num_base_bdevs_discovered": 2, 00:24:25.348 "num_base_bdevs_operational": 2, 00:24:25.348 "base_bdevs_list": [ 00:24:25.348 { 00:24:25.348 "name": "pt1", 00:24:25.348 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:25.348 "is_configured": true, 00:24:25.348 "data_offset": 256, 00:24:25.348 "data_size": 7936 00:24:25.348 }, 00:24:25.348 { 00:24:25.348 "name": "pt2", 00:24:25.348 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:25.348 "is_configured": true, 00:24:25.348 "data_offset": 256, 00:24:25.348 "data_size": 7936 00:24:25.348 } 00:24:25.348 ] 00:24:25.348 }' 00:24:25.348 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:25.348 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:25.621 05:34:57 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.621 [2024-11-20 05:34:57.302102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:25.621 "name": "raid_bdev1", 00:24:25.621 "aliases": [ 00:24:25.621 "27ad4ef3-fc85-449e-b458-f1f9b0da26f5" 00:24:25.621 ], 00:24:25.621 "product_name": "Raid Volume", 00:24:25.621 "block_size": 4096, 00:24:25.621 "num_blocks": 7936, 00:24:25.621 "uuid": "27ad4ef3-fc85-449e-b458-f1f9b0da26f5", 00:24:25.621 "assigned_rate_limits": { 00:24:25.621 "rw_ios_per_sec": 0, 00:24:25.621 "rw_mbytes_per_sec": 0, 00:24:25.621 "r_mbytes_per_sec": 0, 00:24:25.621 "w_mbytes_per_sec": 0 00:24:25.621 }, 00:24:25.621 "claimed": false, 00:24:25.621 "zoned": false, 00:24:25.621 "supported_io_types": { 00:24:25.621 "read": true, 00:24:25.621 "write": true, 00:24:25.621 "unmap": false, 00:24:25.621 "flush": false, 
00:24:25.621 "reset": true, 00:24:25.621 "nvme_admin": false, 00:24:25.621 "nvme_io": false, 00:24:25.621 "nvme_io_md": false, 00:24:25.621 "write_zeroes": true, 00:24:25.621 "zcopy": false, 00:24:25.621 "get_zone_info": false, 00:24:25.621 "zone_management": false, 00:24:25.621 "zone_append": false, 00:24:25.621 "compare": false, 00:24:25.621 "compare_and_write": false, 00:24:25.621 "abort": false, 00:24:25.621 "seek_hole": false, 00:24:25.621 "seek_data": false, 00:24:25.621 "copy": false, 00:24:25.621 "nvme_iov_md": false 00:24:25.621 }, 00:24:25.621 "memory_domains": [ 00:24:25.621 { 00:24:25.621 "dma_device_id": "system", 00:24:25.621 "dma_device_type": 1 00:24:25.621 }, 00:24:25.621 { 00:24:25.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.621 "dma_device_type": 2 00:24:25.621 }, 00:24:25.621 { 00:24:25.621 "dma_device_id": "system", 00:24:25.621 "dma_device_type": 1 00:24:25.621 }, 00:24:25.621 { 00:24:25.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.621 "dma_device_type": 2 00:24:25.621 } 00:24:25.621 ], 00:24:25.621 "driver_specific": { 00:24:25.621 "raid": { 00:24:25.621 "uuid": "27ad4ef3-fc85-449e-b458-f1f9b0da26f5", 00:24:25.621 "strip_size_kb": 0, 00:24:25.621 "state": "online", 00:24:25.621 "raid_level": "raid1", 00:24:25.621 "superblock": true, 00:24:25.621 "num_base_bdevs": 2, 00:24:25.621 "num_base_bdevs_discovered": 2, 00:24:25.621 "num_base_bdevs_operational": 2, 00:24:25.621 "base_bdevs_list": [ 00:24:25.621 { 00:24:25.621 "name": "pt1", 00:24:25.621 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:25.621 "is_configured": true, 00:24:25.621 "data_offset": 256, 00:24:25.621 "data_size": 7936 00:24:25.621 }, 00:24:25.621 { 00:24:25.621 "name": "pt2", 00:24:25.621 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:25.621 "is_configured": true, 00:24:25.621 "data_offset": 256, 00:24:25.621 "data_size": 7936 00:24:25.621 } 00:24:25.621 ] 00:24:25.621 } 00:24:25.621 } 00:24:25.621 }' 00:24:25.621 05:34:57 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:25.621 pt2' 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.621 05:34:57 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.621 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.880 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:25.880 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:25.880 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:25.880 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:25.880 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.880 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.880 [2024-11-20 05:34:57.465983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:25.880 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.880 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=27ad4ef3-fc85-449e-b458-f1f9b0da26f5 00:24:25.880 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 27ad4ef3-fc85-449e-b458-f1f9b0da26f5 ']' 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.881 [2024-11-20 05:34:57.497679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:25.881 [2024-11-20 05:34:57.497708] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:25.881 [2024-11-20 05:34:57.497782] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:25.881 [2024-11-20 05:34:57.497841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:25.881 [2024-11-20 05:34:57.497853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.881 [2024-11-20 05:34:57.597744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:25.881 [2024-11-20 05:34:57.599626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:25.881 [2024-11-20 05:34:57.599695] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:25.881 [2024-11-20 05:34:57.599743] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:25.881 [2024-11-20 05:34:57.599759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:25.881 [2024-11-20 05:34:57.599769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:24:25.881 request: 00:24:25.881 { 00:24:25.881 "name": "raid_bdev1", 00:24:25.881 "raid_level": "raid1", 00:24:25.881 "base_bdevs": [ 00:24:25.881 "malloc1", 00:24:25.881 "malloc2" 00:24:25.881 ], 00:24:25.881 "superblock": false, 00:24:25.881 "method": "bdev_raid_create", 00:24:25.881 "req_id": 1 00:24:25.881 } 00:24:25.881 Got JSON-RPC error response 00:24:25.881 response: 00:24:25.881 { 00:24:25.881 "code": -17, 00:24:25.881 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:25.881 } 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.881 [2024-11-20 05:34:57.637742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:25.881 [2024-11-20 05:34:57.637801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.881 [2024-11-20 05:34:57.637818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:25.881 [2024-11-20 05:34:57.637828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.881 [2024-11-20 05:34:57.640001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.881 [2024-11-20 05:34:57.640039] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:25.881 [2024-11-20 05:34:57.640114] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:25.881 [2024-11-20 05:34:57.640174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:25.881 pt1 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.881 "name": "raid_bdev1", 00:24:25.881 "uuid": "27ad4ef3-fc85-449e-b458-f1f9b0da26f5", 00:24:25.881 "strip_size_kb": 0, 00:24:25.881 "state": "configuring", 00:24:25.881 "raid_level": "raid1", 00:24:25.881 "superblock": true, 00:24:25.881 "num_base_bdevs": 2, 00:24:25.881 "num_base_bdevs_discovered": 1, 00:24:25.881 "num_base_bdevs_operational": 2, 00:24:25.881 "base_bdevs_list": [ 00:24:25.881 { 00:24:25.881 "name": "pt1", 00:24:25.881 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:25.881 "is_configured": true, 00:24:25.881 "data_offset": 256, 00:24:25.881 "data_size": 7936 00:24:25.881 }, 00:24:25.881 { 00:24:25.881 "name": null, 00:24:25.881 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:25.881 "is_configured": false, 00:24:25.881 "data_offset": 256, 00:24:25.881 "data_size": 7936 00:24:25.881 } 00:24:25.881 ] 00:24:25.881 }' 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:25.881 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.141 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:26.141 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:26.141 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:26.141 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:26.141 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.141 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set 
+x 00:24:26.141 [2024-11-20 05:34:57.973843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:26.141 [2024-11-20 05:34:57.973914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:26.141 [2024-11-20 05:34:57.973933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:26.141 [2024-11-20 05:34:57.973945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:26.141 [2024-11-20 05:34:57.974390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:26.141 [2024-11-20 05:34:57.974417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:26.141 [2024-11-20 05:34:57.974491] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:26.141 [2024-11-20 05:34:57.974515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:26.400 [2024-11-20 05:34:57.974629] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:26.400 [2024-11-20 05:34:57.974646] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:26.400 [2024-11-20 05:34:57.974881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:26.400 [2024-11-20 05:34:57.975022] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:26.400 [2024-11-20 05:34:57.975035] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:26.400 [2024-11-20 05:34:57.975168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:26.400 pt2 00:24:26.400 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.400 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:26.400 05:34:57 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:26.400 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:26.400 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:26.400 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:26.400 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:26.400 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:26.400 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:26.400 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:26.400 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:26.401 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:26.401 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:26.401 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.401 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.401 05:34:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.401 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.401 05:34:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.401 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:26.401 "name": "raid_bdev1", 00:24:26.401 "uuid": "27ad4ef3-fc85-449e-b458-f1f9b0da26f5", 00:24:26.401 
"strip_size_kb": 0, 00:24:26.401 "state": "online", 00:24:26.401 "raid_level": "raid1", 00:24:26.401 "superblock": true, 00:24:26.401 "num_base_bdevs": 2, 00:24:26.401 "num_base_bdevs_discovered": 2, 00:24:26.401 "num_base_bdevs_operational": 2, 00:24:26.401 "base_bdevs_list": [ 00:24:26.401 { 00:24:26.401 "name": "pt1", 00:24:26.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:26.401 "is_configured": true, 00:24:26.401 "data_offset": 256, 00:24:26.401 "data_size": 7936 00:24:26.401 }, 00:24:26.401 { 00:24:26.401 "name": "pt2", 00:24:26.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:26.401 "is_configured": true, 00:24:26.401 "data_offset": 256, 00:24:26.401 "data_size": 7936 00:24:26.401 } 00:24:26.401 ] 00:24:26.401 }' 00:24:26.401 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:26.401 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.660 05:34:58 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:26.660 [2024-11-20 05:34:58.298169] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:26.660 "name": "raid_bdev1", 00:24:26.660 "aliases": [ 00:24:26.660 "27ad4ef3-fc85-449e-b458-f1f9b0da26f5" 00:24:26.660 ], 00:24:26.660 "product_name": "Raid Volume", 00:24:26.660 "block_size": 4096, 00:24:26.660 "num_blocks": 7936, 00:24:26.660 "uuid": "27ad4ef3-fc85-449e-b458-f1f9b0da26f5", 00:24:26.660 "assigned_rate_limits": { 00:24:26.660 "rw_ios_per_sec": 0, 00:24:26.660 "rw_mbytes_per_sec": 0, 00:24:26.660 "r_mbytes_per_sec": 0, 00:24:26.660 "w_mbytes_per_sec": 0 00:24:26.660 }, 00:24:26.660 "claimed": false, 00:24:26.660 "zoned": false, 00:24:26.660 "supported_io_types": { 00:24:26.660 "read": true, 00:24:26.660 "write": true, 00:24:26.660 "unmap": false, 00:24:26.660 "flush": false, 00:24:26.660 "reset": true, 00:24:26.660 "nvme_admin": false, 00:24:26.660 "nvme_io": false, 00:24:26.660 "nvme_io_md": false, 00:24:26.660 "write_zeroes": true, 00:24:26.660 "zcopy": false, 00:24:26.660 "get_zone_info": false, 00:24:26.660 "zone_management": false, 00:24:26.660 "zone_append": false, 00:24:26.660 "compare": false, 00:24:26.660 "compare_and_write": false, 00:24:26.660 "abort": false, 00:24:26.660 "seek_hole": false, 00:24:26.660 "seek_data": false, 00:24:26.660 "copy": false, 00:24:26.660 "nvme_iov_md": false 00:24:26.660 }, 00:24:26.660 "memory_domains": [ 00:24:26.660 { 00:24:26.660 "dma_device_id": "system", 00:24:26.660 "dma_device_type": 1 00:24:26.660 }, 00:24:26.660 { 00:24:26.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.660 "dma_device_type": 2 00:24:26.660 }, 00:24:26.660 { 00:24:26.660 "dma_device_id": "system", 00:24:26.660 
"dma_device_type": 1 00:24:26.660 }, 00:24:26.660 { 00:24:26.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.660 "dma_device_type": 2 00:24:26.660 } 00:24:26.660 ], 00:24:26.660 "driver_specific": { 00:24:26.660 "raid": { 00:24:26.660 "uuid": "27ad4ef3-fc85-449e-b458-f1f9b0da26f5", 00:24:26.660 "strip_size_kb": 0, 00:24:26.660 "state": "online", 00:24:26.660 "raid_level": "raid1", 00:24:26.660 "superblock": true, 00:24:26.660 "num_base_bdevs": 2, 00:24:26.660 "num_base_bdevs_discovered": 2, 00:24:26.660 "num_base_bdevs_operational": 2, 00:24:26.660 "base_bdevs_list": [ 00:24:26.660 { 00:24:26.660 "name": "pt1", 00:24:26.660 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:26.660 "is_configured": true, 00:24:26.660 "data_offset": 256, 00:24:26.660 "data_size": 7936 00:24:26.660 }, 00:24:26.660 { 00:24:26.660 "name": "pt2", 00:24:26.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:26.660 "is_configured": true, 00:24:26.660 "data_offset": 256, 00:24:26.660 "data_size": 7936 00:24:26.660 } 00:24:26.660 ] 00:24:26.660 } 00:24:26.660 } 00:24:26.660 }' 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:26.660 pt2' 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.660 
05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.660 [2024-11-20 05:34:58.462187] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 27ad4ef3-fc85-449e-b458-f1f9b0da26f5 '!=' 27ad4ef3-fc85-449e-b458-f1f9b0da26f5 ']' 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:26.660 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.661 [2024-11-20 05:34:58.481963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.661 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.919 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.919 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:26.919 "name": "raid_bdev1", 00:24:26.919 "uuid": "27ad4ef3-fc85-449e-b458-f1f9b0da26f5", 00:24:26.919 "strip_size_kb": 0, 00:24:26.919 "state": "online", 00:24:26.919 "raid_level": "raid1", 00:24:26.919 "superblock": true, 00:24:26.919 "num_base_bdevs": 2, 00:24:26.919 "num_base_bdevs_discovered": 1, 00:24:26.919 "num_base_bdevs_operational": 1, 00:24:26.919 "base_bdevs_list": [ 00:24:26.919 { 00:24:26.919 "name": null, 00:24:26.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.919 "is_configured": false, 00:24:26.919 "data_offset": 0, 00:24:26.919 "data_size": 7936 00:24:26.919 }, 00:24:26.919 { 00:24:26.919 "name": "pt2", 00:24:26.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:26.920 "is_configured": true, 00:24:26.920 "data_offset": 256, 00:24:26.920 "data_size": 7936 00:24:26.920 } 00:24:26.920 ] 00:24:26.920 }' 00:24:26.920 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:26.920 05:34:58 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.178 [2024-11-20 05:34:58.802008] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:27.178 [2024-11-20 05:34:58.802039] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:27.178 [2024-11-20 05:34:58.802104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:27.178 [2024-11-20 05:34:58.802149] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:27.178 [2024-11-20 05:34:58.802159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.178 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.178 [2024-11-20 05:34:58.854005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:27.178 [2024-11-20 05:34:58.854062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:27.178 [2024-11-20 05:34:58.854077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:27.178 [2024-11-20 05:34:58.854088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:27.178 [2024-11-20 05:34:58.856257] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:27.178 [2024-11-20 05:34:58.856294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:27.179 [2024-11-20 05:34:58.856376] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:27.179 [2024-11-20 05:34:58.856420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:27.179 [2024-11-20 05:34:58.856512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:27.179 [2024-11-20 05:34:58.856525] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:27.179 [2024-11-20 05:34:58.856764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:27.179 [2024-11-20 05:34:58.856902] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:27.179 [2024-11-20 05:34:58.856916] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:24:27.179 [2024-11-20 05:34:58.857047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:27.179 pt2 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:27.179 "name": "raid_bdev1", 00:24:27.179 "uuid": "27ad4ef3-fc85-449e-b458-f1f9b0da26f5", 00:24:27.179 "strip_size_kb": 0, 00:24:27.179 "state": "online", 00:24:27.179 "raid_level": "raid1", 00:24:27.179 "superblock": true, 00:24:27.179 "num_base_bdevs": 2, 00:24:27.179 "num_base_bdevs_discovered": 1, 00:24:27.179 "num_base_bdevs_operational": 1, 00:24:27.179 "base_bdevs_list": [ 00:24:27.179 { 00:24:27.179 "name": null, 00:24:27.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.179 "is_configured": false, 00:24:27.179 "data_offset": 256, 00:24:27.179 "data_size": 7936 00:24:27.179 }, 00:24:27.179 { 00:24:27.179 "name": "pt2", 00:24:27.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:27.179 "is_configured": true, 00:24:27.179 "data_offset": 256, 00:24:27.179 "data_size": 7936 00:24:27.179 } 00:24:27.179 ] 00:24:27.179 }' 
00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:27.179 05:34:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.520 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:27.520 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.520 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.520 [2024-11-20 05:34:59.190047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:27.520 [2024-11-20 05:34:59.190075] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:27.520 [2024-11-20 05:34:59.190128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:27.520 [2024-11-20 05:34:59.190167] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:27.520 [2024-11-20 05:34:59.190175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:24:27.520 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.520 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.520 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.520 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.520 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:24:27.520 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.520 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:24:27.520 05:34:59 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:24:27.520 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:24:27.520 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:27.520 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.520 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.520 [2024-11-20 05:34:59.234074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:27.520 [2024-11-20 05:34:59.234121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:27.520 [2024-11-20 05:34:59.234135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:27.521 [2024-11-20 05:34:59.234142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:27.521 [2024-11-20 05:34:59.235959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:27.521 [2024-11-20 05:34:59.235984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:27.521 [2024-11-20 05:34:59.236047] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:27.521 [2024-11-20 05:34:59.236083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:27.521 [2024-11-20 05:34:59.236184] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:27.521 [2024-11-20 05:34:59.236196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:27.521 [2024-11-20 05:34:59.236209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:24:27.521 [2024-11-20 05:34:59.236253] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:27.521 [2024-11-20 05:34:59.236308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:24:27.521 [2024-11-20 05:34:59.236315] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:27.521 [2024-11-20 05:34:59.236533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:27.521 [2024-11-20 05:34:59.236644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:24:27.521 [2024-11-20 05:34:59.236656] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:24:27.521 [2024-11-20 05:34:59.236778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:27.521 pt1 00:24:27.521 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.521 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:24:27.521 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:27.521 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:27.521 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:27.521 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:27.521 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:27.521 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:27.521 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:27.521 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:24:27.521 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:27.521 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:27.521 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.521 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.521 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.522 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.522 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.522 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:27.522 "name": "raid_bdev1", 00:24:27.522 "uuid": "27ad4ef3-fc85-449e-b458-f1f9b0da26f5", 00:24:27.522 "strip_size_kb": 0, 00:24:27.522 "state": "online", 00:24:27.522 "raid_level": "raid1", 00:24:27.522 "superblock": true, 00:24:27.522 "num_base_bdevs": 2, 00:24:27.522 "num_base_bdevs_discovered": 1, 00:24:27.522 "num_base_bdevs_operational": 1, 00:24:27.522 "base_bdevs_list": [ 00:24:27.522 { 00:24:27.522 "name": null, 00:24:27.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.522 "is_configured": false, 00:24:27.522 "data_offset": 256, 00:24:27.522 "data_size": 7936 00:24:27.522 }, 00:24:27.522 { 00:24:27.522 "name": "pt2", 00:24:27.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:27.522 "is_configured": true, 00:24:27.522 "data_offset": 256, 00:24:27.522 "data_size": 7936 00:24:27.522 } 00:24:27.522 ] 00:24:27.522 }' 00:24:27.522 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:27.522 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.783 [2024-11-20 05:34:59.578331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 27ad4ef3-fc85-449e-b458-f1f9b0da26f5 '!=' 27ad4ef3-fc85-449e-b458-f1f9b0da26f5 ']' 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 83765 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 83765 ']' 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 83765 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:24:27.783 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83765 00:24:28.042 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:28.042 killing process with pid 83765 00:24:28.042 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:28.042 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83765' 00:24:28.042 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 83765 00:24:28.042 [2024-11-20 05:34:59.633285] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:28.042 [2024-11-20 05:34:59.633355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:28.042 05:34:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 83765 00:24:28.042 [2024-11-20 05:34:59.633403] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:28.042 [2024-11-20 05:34:59.633418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:24:28.042 [2024-11-20 05:34:59.735604] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:28.607 05:35:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:24:28.607 00:24:28.607 real 0m4.358s 00:24:28.607 user 0m6.702s 00:24:28.607 sys 0m0.755s 00:24:28.607 05:35:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:28.607 05:35:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:28.607 ************************************ 00:24:28.607 END TEST raid_superblock_test_4k 00:24:28.607 ************************************ 00:24:28.607 05:35:00 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:24:28.607 05:35:00 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:24:28.607 05:35:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:24:28.607 05:35:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:28.607 05:35:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:28.607 ************************************ 00:24:28.607 START TEST raid_rebuild_test_sb_4k 00:24:28.607 ************************************ 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=84077 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 84077 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 84077 ']' 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:28.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:28.607 05:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:28.607 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:28.607 Zero copy mechanism will not be used. 00:24:28.607 [2024-11-20 05:35:00.422496] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:24:28.607 [2024-11-20 05:35:00.422613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84077 ] 00:24:28.868 [2024-11-20 05:35:00.572937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.868 [2024-11-20 05:35:00.656745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.129 [2024-11-20 05:35:00.767122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:29.129 [2024-11-20 05:35:00.767176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.695 BaseBdev1_malloc 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.695 [2024-11-20 05:35:01.258624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:29.695 [2024-11-20 05:35:01.258687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:29.695 [2024-11-20 05:35:01.258704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:29.695 [2024-11-20 05:35:01.258713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:29.695 [2024-11-20 05:35:01.260511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:29.695 [2024-11-20 05:35:01.260541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:29.695 BaseBdev1 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.695 BaseBdev2_malloc 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.695 [2024-11-20 05:35:01.294574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:29.695 [2024-11-20 05:35:01.294631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:29.695 [2024-11-20 05:35:01.294646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:29.695 [2024-11-20 05:35:01.294656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:29.695 [2024-11-20 05:35:01.296488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:29.695 [2024-11-20 05:35:01.296516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:29.695 BaseBdev2 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.695 spare_malloc 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.695 spare_delay 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.695 [2024-11-20 05:35:01.354033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:29.695 [2024-11-20 05:35:01.354094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:29.695 [2024-11-20 05:35:01.354112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:29.695 [2024-11-20 05:35:01.354121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:29.695 [2024-11-20 05:35:01.355951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:29.695 [2024-11-20 05:35:01.355982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:29.695 spare 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:29.695 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.695 [2024-11-20 05:35:01.362077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:29.695 [2024-11-20 05:35:01.363624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:29.695 [2024-11-20 05:35:01.363772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:29.695 [2024-11-20 05:35:01.363788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:29.695 [2024-11-20 05:35:01.364010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:29.696 [2024-11-20 05:35:01.364144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:29.696 [2024-11-20 05:35:01.364166] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:29.696 [2024-11-20 05:35:01.364291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:29.696 "name": "raid_bdev1", 00:24:29.696 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:29.696 "strip_size_kb": 0, 00:24:29.696 "state": "online", 00:24:29.696 "raid_level": "raid1", 00:24:29.696 "superblock": true, 00:24:29.696 "num_base_bdevs": 2, 00:24:29.696 "num_base_bdevs_discovered": 2, 00:24:29.696 "num_base_bdevs_operational": 2, 00:24:29.696 "base_bdevs_list": [ 00:24:29.696 { 00:24:29.696 "name": "BaseBdev1", 00:24:29.696 "uuid": "798f1d21-6335-511e-9ded-3eed0d1b74eb", 00:24:29.696 "is_configured": true, 00:24:29.696 "data_offset": 256, 00:24:29.696 "data_size": 7936 00:24:29.696 }, 00:24:29.696 { 00:24:29.696 "name": "BaseBdev2", 00:24:29.696 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:29.696 "is_configured": true, 00:24:29.696 "data_offset": 256, 00:24:29.696 "data_size": 7936 00:24:29.696 } 00:24:29.696 ] 00:24:29.696 }' 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:24:29.696 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.956 [2024-11-20 05:35:01.706381] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock 
raid_bdev1 /dev/nbd0 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:29.956 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:30.218 [2024-11-20 05:35:01.954201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:30.218 /dev/nbd0 00:24:30.218 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:30.218 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:30.218 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:30.218 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:24:30.218 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:30.218 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:30.218 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:30.218 05:35:01 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:24:30.218 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:30.218 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:30.218 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:30.218 1+0 records in 00:24:30.218 1+0 records out 00:24:30.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206391 s, 19.8 MB/s 00:24:30.218 05:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:30.218 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:24:30.218 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:30.218 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:30.218 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:24:30.218 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:30.218 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:30.218 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:24:30.218 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:24:30.218 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:24:31.153 7936+0 records in 00:24:31.153 7936+0 records out 00:24:31.153 32505856 bytes (33 MB, 31 MiB) copied, 0.717692 s, 45.3 MB/s 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:31.153 [2024-11-20 05:35:02.966523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.153 05:35:02 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.153 [2024-11-20 05:35:02.978623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:31.153 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:31.411 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.411 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.411 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.411 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.411 05:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:31.411 05:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:31.411 "name": "raid_bdev1", 00:24:31.411 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:31.411 "strip_size_kb": 0, 00:24:31.411 "state": "online", 00:24:31.411 "raid_level": "raid1", 00:24:31.411 "superblock": true, 00:24:31.411 "num_base_bdevs": 2, 00:24:31.411 "num_base_bdevs_discovered": 1, 00:24:31.411 "num_base_bdevs_operational": 1, 00:24:31.411 "base_bdevs_list": [ 00:24:31.411 { 00:24:31.411 "name": null, 00:24:31.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.412 "is_configured": false, 00:24:31.412 "data_offset": 0, 00:24:31.412 "data_size": 7936 00:24:31.412 }, 00:24:31.412 { 00:24:31.412 "name": "BaseBdev2", 00:24:31.412 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:31.412 "is_configured": true, 00:24:31.412 "data_offset": 256, 00:24:31.412 "data_size": 7936 00:24:31.412 } 00:24:31.412 ] 00:24:31.412 }' 00:24:31.412 05:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:31.412 05:35:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.670 05:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:31.670 05:35:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.670 05:35:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.670 [2024-11-20 05:35:03.294669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:31.670 [2024-11-20 05:35:03.304331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:24:31.670 05:35:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.670 05:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:31.670 [2024-11-20 
05:35:03.305924] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:32.603 "name": "raid_bdev1", 00:24:32.603 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:32.603 "strip_size_kb": 0, 00:24:32.603 "state": "online", 00:24:32.603 "raid_level": "raid1", 00:24:32.603 "superblock": true, 00:24:32.603 "num_base_bdevs": 2, 00:24:32.603 "num_base_bdevs_discovered": 2, 00:24:32.603 "num_base_bdevs_operational": 2, 00:24:32.603 "process": { 00:24:32.603 "type": "rebuild", 00:24:32.603 "target": "spare", 00:24:32.603 "progress": { 00:24:32.603 "blocks": 2560, 00:24:32.603 "percent": 32 00:24:32.603 } 00:24:32.603 }, 00:24:32.603 "base_bdevs_list": [ 00:24:32.603 { 00:24:32.603 "name": "spare", 
00:24:32.603 "uuid": "ac28ceb0-3f96-5759-ad00-975c25662db2", 00:24:32.603 "is_configured": true, 00:24:32.603 "data_offset": 256, 00:24:32.603 "data_size": 7936 00:24:32.603 }, 00:24:32.603 { 00:24:32.603 "name": "BaseBdev2", 00:24:32.603 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:32.603 "is_configured": true, 00:24:32.603 "data_offset": 256, 00:24:32.603 "data_size": 7936 00:24:32.603 } 00:24:32.603 ] 00:24:32.603 }' 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.603 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.603 [2024-11-20 05:35:04.420170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:32.864 [2024-11-20 05:35:04.511407] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:32.864 [2024-11-20 05:35:04.511483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.864 [2024-11-20 05:35:04.511496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:32.864 [2024-11-20 05:35:04.511503] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:32.864 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.864 05:35:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:32.864 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:32.864 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:32.864 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:32.864 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:32.864 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:32.864 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:32.864 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:32.864 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:32.864 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:32.864 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.864 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.864 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.864 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.864 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.864 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:32.864 "name": "raid_bdev1", 00:24:32.864 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:32.864 "strip_size_kb": 0, 00:24:32.864 "state": "online", 00:24:32.864 "raid_level": "raid1", 00:24:32.864 
"superblock": true, 00:24:32.864 "num_base_bdevs": 2, 00:24:32.864 "num_base_bdevs_discovered": 1, 00:24:32.864 "num_base_bdevs_operational": 1, 00:24:32.864 "base_bdevs_list": [ 00:24:32.864 { 00:24:32.864 "name": null, 00:24:32.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.864 "is_configured": false, 00:24:32.864 "data_offset": 0, 00:24:32.864 "data_size": 7936 00:24:32.864 }, 00:24:32.864 { 00:24:32.864 "name": "BaseBdev2", 00:24:32.864 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:32.864 "is_configured": true, 00:24:32.864 "data_offset": 256, 00:24:32.864 "data_size": 7936 00:24:32.865 } 00:24:32.865 ] 00:24:32.865 }' 00:24:32.865 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:32.865 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:33.123 "name": "raid_bdev1", 00:24:33.123 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:33.123 "strip_size_kb": 0, 00:24:33.123 "state": "online", 00:24:33.123 "raid_level": "raid1", 00:24:33.123 "superblock": true, 00:24:33.123 "num_base_bdevs": 2, 00:24:33.123 "num_base_bdevs_discovered": 1, 00:24:33.123 "num_base_bdevs_operational": 1, 00:24:33.123 "base_bdevs_list": [ 00:24:33.123 { 00:24:33.123 "name": null, 00:24:33.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.123 "is_configured": false, 00:24:33.123 "data_offset": 0, 00:24:33.123 "data_size": 7936 00:24:33.123 }, 00:24:33.123 { 00:24:33.123 "name": "BaseBdev2", 00:24:33.123 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:33.123 "is_configured": true, 00:24:33.123 "data_offset": 256, 00:24:33.123 "data_size": 7936 00:24:33.123 } 00:24:33.123 ] 00:24:33.123 }' 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.123 [2024-11-20 05:35:04.930332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:33.123 [2024-11-20 05:35:04.939311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d00018d330 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.123 05:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:33.124 [2024-11-20 05:35:04.940875] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:34.499 05:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:34.499 05:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:34.499 05:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:34.499 05:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:34.499 05:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:34.499 05:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.499 05:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.499 05:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.499 05:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:34.499 05:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.499 05:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:34.499 "name": "raid_bdev1", 00:24:34.499 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:34.499 "strip_size_kb": 0, 00:24:34.499 "state": "online", 00:24:34.499 "raid_level": "raid1", 00:24:34.499 "superblock": true, 00:24:34.499 "num_base_bdevs": 2, 00:24:34.499 "num_base_bdevs_discovered": 2, 00:24:34.499 "num_base_bdevs_operational": 2, 00:24:34.499 "process": { 00:24:34.499 
"type": "rebuild", 00:24:34.499 "target": "spare", 00:24:34.499 "progress": { 00:24:34.499 "blocks": 2560, 00:24:34.499 "percent": 32 00:24:34.499 } 00:24:34.499 }, 00:24:34.499 "base_bdevs_list": [ 00:24:34.499 { 00:24:34.499 "name": "spare", 00:24:34.499 "uuid": "ac28ceb0-3f96-5759-ad00-975c25662db2", 00:24:34.499 "is_configured": true, 00:24:34.499 "data_offset": 256, 00:24:34.499 "data_size": 7936 00:24:34.499 }, 00:24:34.499 { 00:24:34.499 "name": "BaseBdev2", 00:24:34.499 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:34.499 "is_configured": true, 00:24:34.499 "data_offset": 256, 00:24:34.499 "data_size": 7936 00:24:34.499 } 00:24:34.499 ] 00:24:34.499 }' 00:24:34.499 05:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:34.499 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=541 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:34.499 "name": "raid_bdev1", 00:24:34.499 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:34.499 "strip_size_kb": 0, 00:24:34.499 "state": "online", 00:24:34.499 "raid_level": "raid1", 00:24:34.499 "superblock": true, 00:24:34.499 "num_base_bdevs": 2, 00:24:34.499 "num_base_bdevs_discovered": 2, 00:24:34.499 "num_base_bdevs_operational": 2, 00:24:34.499 "process": { 00:24:34.499 "type": "rebuild", 00:24:34.499 "target": "spare", 00:24:34.499 "progress": { 00:24:34.499 "blocks": 2816, 00:24:34.499 "percent": 35 00:24:34.499 } 00:24:34.499 }, 00:24:34.499 "base_bdevs_list": [ 00:24:34.499 { 00:24:34.499 "name": "spare", 00:24:34.499 "uuid": "ac28ceb0-3f96-5759-ad00-975c25662db2", 00:24:34.499 "is_configured": true, 
00:24:34.499 "data_offset": 256, 00:24:34.499 "data_size": 7936 00:24:34.499 }, 00:24:34.499 { 00:24:34.499 "name": "BaseBdev2", 00:24:34.499 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:34.499 "is_configured": true, 00:24:34.499 "data_offset": 256, 00:24:34.499 "data_size": 7936 00:24:34.499 } 00:24:34.499 ] 00:24:34.499 }' 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:34.499 05:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@10 -- # set +x 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:35.435 "name": "raid_bdev1", 00:24:35.435 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:35.435 "strip_size_kb": 0, 00:24:35.435 "state": "online", 00:24:35.435 "raid_level": "raid1", 00:24:35.435 "superblock": true, 00:24:35.435 "num_base_bdevs": 2, 00:24:35.435 "num_base_bdevs_discovered": 2, 00:24:35.435 "num_base_bdevs_operational": 2, 00:24:35.435 "process": { 00:24:35.435 "type": "rebuild", 00:24:35.435 "target": "spare", 00:24:35.435 "progress": { 00:24:35.435 "blocks": 5376, 00:24:35.435 "percent": 67 00:24:35.435 } 00:24:35.435 }, 00:24:35.435 "base_bdevs_list": [ 00:24:35.435 { 00:24:35.435 "name": "spare", 00:24:35.435 "uuid": "ac28ceb0-3f96-5759-ad00-975c25662db2", 00:24:35.435 "is_configured": true, 00:24:35.435 "data_offset": 256, 00:24:35.435 "data_size": 7936 00:24:35.435 }, 00:24:35.435 { 00:24:35.435 "name": "BaseBdev2", 00:24:35.435 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:35.435 "is_configured": true, 00:24:35.435 "data_offset": 256, 00:24:35.435 "data_size": 7936 00:24:35.435 } 00:24:35.435 ] 00:24:35.435 }' 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:35.435 05:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:36.376 [2024-11-20 05:35:08.054537] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:24:36.376 [2024-11-20 05:35:08.054610] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:36.376 [2024-11-20 05:35:08.054701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:36.639 "name": "raid_bdev1", 00:24:36.639 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:36.639 "strip_size_kb": 0, 00:24:36.639 "state": "online", 00:24:36.639 "raid_level": "raid1", 00:24:36.639 "superblock": true, 00:24:36.639 "num_base_bdevs": 2, 00:24:36.639 "num_base_bdevs_discovered": 2, 00:24:36.639 "num_base_bdevs_operational": 2, 
00:24:36.639 "base_bdevs_list": [ 00:24:36.639 { 00:24:36.639 "name": "spare", 00:24:36.639 "uuid": "ac28ceb0-3f96-5759-ad00-975c25662db2", 00:24:36.639 "is_configured": true, 00:24:36.639 "data_offset": 256, 00:24:36.639 "data_size": 7936 00:24:36.639 }, 00:24:36.639 { 00:24:36.639 "name": "BaseBdev2", 00:24:36.639 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:36.639 "is_configured": true, 00:24:36.639 "data_offset": 256, 00:24:36.639 "data_size": 7936 00:24:36.639 } 00:24:36.639 ] 00:24:36.639 }' 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:36.639 "name": "raid_bdev1", 00:24:36.639 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:36.639 "strip_size_kb": 0, 00:24:36.639 "state": "online", 00:24:36.639 "raid_level": "raid1", 00:24:36.639 "superblock": true, 00:24:36.639 "num_base_bdevs": 2, 00:24:36.639 "num_base_bdevs_discovered": 2, 00:24:36.639 "num_base_bdevs_operational": 2, 00:24:36.639 "base_bdevs_list": [ 00:24:36.639 { 00:24:36.639 "name": "spare", 00:24:36.639 "uuid": "ac28ceb0-3f96-5759-ad00-975c25662db2", 00:24:36.639 "is_configured": true, 00:24:36.639 "data_offset": 256, 00:24:36.639 "data_size": 7936 00:24:36.639 }, 00:24:36.639 { 00:24:36.639 "name": "BaseBdev2", 00:24:36.639 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:36.639 "is_configured": true, 00:24:36.639 "data_offset": 256, 00:24:36.639 "data_size": 7936 00:24:36.639 } 00:24:36.639 ] 00:24:36.639 }' 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:36.639 "name": "raid_bdev1", 00:24:36.639 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:36.639 "strip_size_kb": 0, 00:24:36.639 "state": "online", 00:24:36.639 "raid_level": "raid1", 00:24:36.639 "superblock": true, 00:24:36.639 "num_base_bdevs": 2, 00:24:36.639 "num_base_bdevs_discovered": 2, 00:24:36.639 "num_base_bdevs_operational": 2, 00:24:36.639 "base_bdevs_list": [ 00:24:36.639 { 00:24:36.639 "name": "spare", 00:24:36.639 "uuid": "ac28ceb0-3f96-5759-ad00-975c25662db2", 00:24:36.639 "is_configured": true, 00:24:36.639 
"data_offset": 256, 00:24:36.639 "data_size": 7936 00:24:36.639 }, 00:24:36.639 { 00:24:36.639 "name": "BaseBdev2", 00:24:36.639 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:36.639 "is_configured": true, 00:24:36.639 "data_offset": 256, 00:24:36.639 "data_size": 7936 00:24:36.639 } 00:24:36.639 ] 00:24:36.639 }' 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:36.639 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.899 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:36.899 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.899 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.899 [2024-11-20 05:35:08.717394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:36.899 [2024-11-20 05:35:08.717422] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:36.899 [2024-11-20 05:35:08.717481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:36.899 [2024-11-20 05:35:08.717541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:36.899 [2024-11-20 05:35:08.717556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:36.899 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.899 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:24:36.899 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:36.899 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.899 05:35:08 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:37.157 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.157 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:37.157 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:37.157 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:37.157 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:37.157 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:37.157 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:37.157 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:37.157 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:37.157 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:37.157 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:24:37.157 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:37.157 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:37.157 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:37.157 /dev/nbd0 00:24:37.157 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:37.418 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:37.418 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:37.418 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:24:37.418 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:37.418 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:37.418 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:37.418 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:24:37.418 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:37.418 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:37.418 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:37.418 1+0 records in 00:24:37.418 1+0 records out 00:24:37.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303155 s, 13.5 MB/s 00:24:37.418 05:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:37.418 05:35:09 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:37.418 /dev/nbd1 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:37.418 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:37.418 1+0 records in 00:24:37.418 1+0 records out 00:24:37.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393004 s, 10.4 MB/s 00:24:37.678 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.678 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:24:37.678 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.678 
05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:37.678 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:24:37.678 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:37.678 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:37.678 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:37.678 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:37.678 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:37.678 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:37.678 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:37.678 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:24:37.678 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:37.678 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:37.939 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:37.939 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:37.939 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:37.939 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:37.939 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:37.939 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:24:37.939 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:37.939 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:37.939 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:37.939 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p 
spare 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.200 [2024-11-20 05:35:09.848503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:38.200 [2024-11-20 05:35:09.848552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.200 [2024-11-20 05:35:09.848569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:38.200 [2024-11-20 05:35:09.848577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.200 [2024-11-20 05:35:09.850393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.200 [2024-11-20 05:35:09.850420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:38.200 [2024-11-20 05:35:09.850493] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:38.200 [2024-11-20 05:35:09.850534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:38.200 [2024-11-20 05:35:09.850643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:38.200 spare 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.200 [2024-11-20 05:35:09.950729] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:38.200 [2024-11-20 05:35:09.950772] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4096 00:24:38.200 [2024-11-20 05:35:09.951031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:24:38.200 [2024-11-20 05:35:09.951185] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:38.200 [2024-11-20 05:35:09.951199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:38.200 [2024-11-20 05:35:09.951339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:38.200 "name": "raid_bdev1", 00:24:38.200 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:38.200 "strip_size_kb": 0, 00:24:38.200 "state": "online", 00:24:38.200 "raid_level": "raid1", 00:24:38.200 "superblock": true, 00:24:38.200 "num_base_bdevs": 2, 00:24:38.200 "num_base_bdevs_discovered": 2, 00:24:38.200 "num_base_bdevs_operational": 2, 00:24:38.200 "base_bdevs_list": [ 00:24:38.200 { 00:24:38.200 "name": "spare", 00:24:38.200 "uuid": "ac28ceb0-3f96-5759-ad00-975c25662db2", 00:24:38.200 "is_configured": true, 00:24:38.200 "data_offset": 256, 00:24:38.200 "data_size": 7936 00:24:38.200 }, 00:24:38.200 { 00:24:38.200 "name": "BaseBdev2", 00:24:38.200 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:38.200 "is_configured": true, 00:24:38.200 "data_offset": 256, 00:24:38.200 "data_size": 7936 00:24:38.200 } 00:24:38.200 ] 00:24:38.200 }' 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:38.200 05:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.462 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:38.462 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:38.462 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:38.462 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local 
target=none 00:24:38.462 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:38.462 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.462 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.462 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.462 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.462 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:38.723 "name": "raid_bdev1", 00:24:38.723 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:38.723 "strip_size_kb": 0, 00:24:38.723 "state": "online", 00:24:38.723 "raid_level": "raid1", 00:24:38.723 "superblock": true, 00:24:38.723 "num_base_bdevs": 2, 00:24:38.723 "num_base_bdevs_discovered": 2, 00:24:38.723 "num_base_bdevs_operational": 2, 00:24:38.723 "base_bdevs_list": [ 00:24:38.723 { 00:24:38.723 "name": "spare", 00:24:38.723 "uuid": "ac28ceb0-3f96-5759-ad00-975c25662db2", 00:24:38.723 "is_configured": true, 00:24:38.723 "data_offset": 256, 00:24:38.723 "data_size": 7936 00:24:38.723 }, 00:24:38.723 { 00:24:38.723 "name": "BaseBdev2", 00:24:38.723 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:38.723 "is_configured": true, 00:24:38.723 "data_offset": 256, 00:24:38.723 "data_size": 7936 00:24:38.723 } 00:24:38.723 ] 00:24:38.723 }' 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.723 [2024-11-20 05:35:10.392661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.723 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:38.723 "name": "raid_bdev1", 00:24:38.723 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:38.723 "strip_size_kb": 0, 00:24:38.723 "state": "online", 00:24:38.723 "raid_level": "raid1", 00:24:38.723 "superblock": true, 00:24:38.723 "num_base_bdevs": 2, 00:24:38.723 "num_base_bdevs_discovered": 1, 00:24:38.723 "num_base_bdevs_operational": 1, 00:24:38.723 "base_bdevs_list": [ 00:24:38.723 { 00:24:38.723 "name": null, 00:24:38.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.723 "is_configured": false, 00:24:38.723 "data_offset": 0, 00:24:38.723 "data_size": 7936 00:24:38.723 }, 00:24:38.723 { 00:24:38.724 "name": "BaseBdev2", 00:24:38.724 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:38.724 "is_configured": true, 00:24:38.724 "data_offset": 256, 00:24:38.724 "data_size": 7936 00:24:38.724 } 00:24:38.724 ] 00:24:38.724 }' 
00:24:38.724 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:38.724 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.982 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:38.982 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.982 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.982 [2024-11-20 05:35:10.708715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:38.982 [2024-11-20 05:35:10.708876] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:38.982 [2024-11-20 05:35:10.708889] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:38.982 [2024-11-20 05:35:10.708915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:38.982 [2024-11-20 05:35:10.717605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:24:38.982 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.982 05:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:38.982 [2024-11-20 05:35:10.719149] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:39.915 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:39.915 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:39.915 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:39.915 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:24:39.916 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:39.916 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.916 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.916 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.916 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:39.916 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:40.174 "name": "raid_bdev1", 00:24:40.174 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:40.174 "strip_size_kb": 0, 00:24:40.174 "state": "online", 00:24:40.174 "raid_level": "raid1", 00:24:40.174 "superblock": true, 00:24:40.174 "num_base_bdevs": 2, 00:24:40.174 "num_base_bdevs_discovered": 2, 00:24:40.174 "num_base_bdevs_operational": 2, 00:24:40.174 "process": { 00:24:40.174 "type": "rebuild", 00:24:40.174 "target": "spare", 00:24:40.174 "progress": { 00:24:40.174 "blocks": 2560, 00:24:40.174 "percent": 32 00:24:40.174 } 00:24:40.174 }, 00:24:40.174 "base_bdevs_list": [ 00:24:40.174 { 00:24:40.174 "name": "spare", 00:24:40.174 "uuid": "ac28ceb0-3f96-5759-ad00-975c25662db2", 00:24:40.174 "is_configured": true, 00:24:40.174 "data_offset": 256, 00:24:40.174 "data_size": 7936 00:24:40.174 }, 00:24:40.174 { 00:24:40.174 "name": "BaseBdev2", 00:24:40.174 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:40.174 "is_configured": true, 00:24:40.174 "data_offset": 256, 00:24:40.174 "data_size": 7936 00:24:40.174 } 00:24:40.174 ] 00:24:40.174 }' 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.174 [2024-11-20 05:35:11.813423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:40.174 [2024-11-20 05:35:11.824059] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:40.174 [2024-11-20 05:35:11.824105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.174 [2024-11-20 05:35:11.824116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:40.174 [2024-11-20 05:35:11.824124] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:40.174 "name": "raid_bdev1", 00:24:40.174 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:40.174 "strip_size_kb": 0, 00:24:40.174 "state": "online", 00:24:40.174 "raid_level": "raid1", 00:24:40.174 "superblock": true, 00:24:40.174 "num_base_bdevs": 2, 00:24:40.174 "num_base_bdevs_discovered": 1, 00:24:40.174 "num_base_bdevs_operational": 1, 00:24:40.174 "base_bdevs_list": [ 00:24:40.174 { 00:24:40.174 "name": null, 00:24:40.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.174 "is_configured": false, 00:24:40.174 "data_offset": 0, 00:24:40.174 "data_size": 7936 00:24:40.174 }, 00:24:40.174 { 00:24:40.174 "name": "BaseBdev2", 00:24:40.174 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:40.174 "is_configured": true, 00:24:40.174 
"data_offset": 256, 00:24:40.174 "data_size": 7936 00:24:40.174 } 00:24:40.174 ] 00:24:40.174 }' 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:40.174 05:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.449 05:35:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:40.449 05:35:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.449 05:35:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.449 [2024-11-20 05:35:12.174688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:40.449 [2024-11-20 05:35:12.174747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:40.449 [2024-11-20 05:35:12.174763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:40.449 [2024-11-20 05:35:12.174772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:40.449 [2024-11-20 05:35:12.175134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:40.449 [2024-11-20 05:35:12.175154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:40.449 [2024-11-20 05:35:12.175223] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:40.449 [2024-11-20 05:35:12.175235] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:40.449 [2024-11-20 05:35:12.175243] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:40.449 [2024-11-20 05:35:12.175262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:40.449 [2024-11-20 05:35:12.183989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:24:40.449 spare 00:24:40.449 05:35:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.449 05:35:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:40.449 [2024-11-20 05:35:12.185555] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:41.382 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:41.382 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:41.382 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:41.382 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:41.382 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:41.382 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.382 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.382 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.382 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:41.382 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:41.640 "name": "raid_bdev1", 00:24:41.640 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:41.640 "strip_size_kb": 0, 00:24:41.640 
"state": "online", 00:24:41.640 "raid_level": "raid1", 00:24:41.640 "superblock": true, 00:24:41.640 "num_base_bdevs": 2, 00:24:41.640 "num_base_bdevs_discovered": 2, 00:24:41.640 "num_base_bdevs_operational": 2, 00:24:41.640 "process": { 00:24:41.640 "type": "rebuild", 00:24:41.640 "target": "spare", 00:24:41.640 "progress": { 00:24:41.640 "blocks": 2560, 00:24:41.640 "percent": 32 00:24:41.640 } 00:24:41.640 }, 00:24:41.640 "base_bdevs_list": [ 00:24:41.640 { 00:24:41.640 "name": "spare", 00:24:41.640 "uuid": "ac28ceb0-3f96-5759-ad00-975c25662db2", 00:24:41.640 "is_configured": true, 00:24:41.640 "data_offset": 256, 00:24:41.640 "data_size": 7936 00:24:41.640 }, 00:24:41.640 { 00:24:41.640 "name": "BaseBdev2", 00:24:41.640 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:41.640 "is_configured": true, 00:24:41.640 "data_offset": 256, 00:24:41.640 "data_size": 7936 00:24:41.640 } 00:24:41.640 ] 00:24:41.640 }' 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:41.640 [2024-11-20 05:35:13.291748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:41.640 [2024-11-20 05:35:13.390879] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:24:41.640 [2024-11-20 05:35:13.390944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:41.640 [2024-11-20 05:35:13.390959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:41.640 [2024-11-20 05:35:13.390965] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.640 05:35:13 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:41.640 "name": "raid_bdev1", 00:24:41.640 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:41.640 "strip_size_kb": 0, 00:24:41.640 "state": "online", 00:24:41.640 "raid_level": "raid1", 00:24:41.640 "superblock": true, 00:24:41.640 "num_base_bdevs": 2, 00:24:41.640 "num_base_bdevs_discovered": 1, 00:24:41.640 "num_base_bdevs_operational": 1, 00:24:41.640 "base_bdevs_list": [ 00:24:41.640 { 00:24:41.640 "name": null, 00:24:41.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.640 "is_configured": false, 00:24:41.640 "data_offset": 0, 00:24:41.640 "data_size": 7936 00:24:41.640 }, 00:24:41.640 { 00:24:41.640 "name": "BaseBdev2", 00:24:41.640 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:41.640 "is_configured": true, 00:24:41.640 "data_offset": 256, 00:24:41.640 "data_size": 7936 00:24:41.640 } 00:24:41.640 ] 00:24:41.640 }' 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:41.640 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:41.900 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:41.900 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:41.900 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:41.900 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:41.900 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:41.900 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.900 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.900 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:41.900 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.159 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.159 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:42.159 "name": "raid_bdev1", 00:24:42.159 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:42.159 "strip_size_kb": 0, 00:24:42.159 "state": "online", 00:24:42.159 "raid_level": "raid1", 00:24:42.159 "superblock": true, 00:24:42.159 "num_base_bdevs": 2, 00:24:42.159 "num_base_bdevs_discovered": 1, 00:24:42.159 "num_base_bdevs_operational": 1, 00:24:42.159 "base_bdevs_list": [ 00:24:42.159 { 00:24:42.159 "name": null, 00:24:42.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.159 "is_configured": false, 00:24:42.159 "data_offset": 0, 00:24:42.159 "data_size": 7936 00:24:42.159 }, 00:24:42.159 { 00:24:42.159 "name": "BaseBdev2", 00:24:42.159 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:42.159 "is_configured": true, 00:24:42.159 "data_offset": 256, 00:24:42.159 "data_size": 7936 00:24:42.159 } 00:24:42.159 ] 00:24:42.159 }' 00:24:42.159 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:42.159 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:42.159 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:42.159 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:42.159 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:42.159 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.159 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:42.159 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.159 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:42.159 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.159 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:42.159 [2024-11-20 05:35:13.845630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:42.159 [2024-11-20 05:35:13.845692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:42.159 [2024-11-20 05:35:13.845709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:42.159 [2024-11-20 05:35:13.845717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:42.159 [2024-11-20 05:35:13.846100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:42.159 [2024-11-20 05:35:13.846111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:42.159 [2024-11-20 05:35:13.846177] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:42.159 [2024-11-20 05:35:13.846187] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:42.159 [2024-11-20 05:35:13.846195] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:42.159 [2024-11-20 05:35:13.846203] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:42.159 BaseBdev1 00:24:42.159 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.159 05:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:43.101 "name": "raid_bdev1", 00:24:43.101 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:43.101 "strip_size_kb": 0, 00:24:43.101 "state": "online", 00:24:43.101 "raid_level": "raid1", 00:24:43.101 "superblock": true, 00:24:43.101 "num_base_bdevs": 2, 00:24:43.101 "num_base_bdevs_discovered": 1, 00:24:43.101 "num_base_bdevs_operational": 1, 00:24:43.101 "base_bdevs_list": [ 00:24:43.101 { 00:24:43.101 "name": null, 00:24:43.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.101 "is_configured": false, 00:24:43.101 "data_offset": 0, 00:24:43.101 "data_size": 7936 00:24:43.101 }, 00:24:43.101 { 00:24:43.101 "name": "BaseBdev2", 00:24:43.101 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:43.101 "is_configured": true, 00:24:43.101 "data_offset": 256, 00:24:43.101 "data_size": 7936 00:24:43.101 } 00:24:43.101 ] 00:24:43.101 }' 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:43.101 05:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:43.363 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:43.363 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:43.363 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:43.363 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:43.363 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:43.363 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.363 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.363 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.363 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:43.363 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.363 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:43.363 "name": "raid_bdev1", 00:24:43.363 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:43.363 "strip_size_kb": 0, 00:24:43.363 "state": "online", 00:24:43.363 "raid_level": "raid1", 00:24:43.363 "superblock": true, 00:24:43.363 "num_base_bdevs": 2, 00:24:43.363 "num_base_bdevs_discovered": 1, 00:24:43.363 "num_base_bdevs_operational": 1, 00:24:43.363 "base_bdevs_list": [ 00:24:43.363 { 00:24:43.363 "name": null, 00:24:43.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.363 "is_configured": false, 00:24:43.363 "data_offset": 0, 00:24:43.363 "data_size": 7936 00:24:43.363 }, 00:24:43.363 { 00:24:43.363 "name": "BaseBdev2", 00:24:43.363 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:43.363 "is_configured": true, 00:24:43.363 "data_offset": 256, 00:24:43.363 "data_size": 7936 00:24:43.363 } 00:24:43.363 ] 00:24:43.363 }' 00:24:43.363 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:43.623 [2024-11-20 05:35:15.261942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:43.623 [2024-11-20 05:35:15.262082] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:43.623 [2024-11-20 05:35:15.262095] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:43.623 request: 00:24:43.623 { 00:24:43.623 "base_bdev": "BaseBdev1", 00:24:43.623 "raid_bdev": "raid_bdev1", 00:24:43.623 "method": "bdev_raid_add_base_bdev", 00:24:43.623 "req_id": 1 00:24:43.623 } 00:24:43.623 Got JSON-RPC error response 00:24:43.623 response: 00:24:43.623 { 00:24:43.623 "code": -22, 00:24:43.623 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:43.623 } 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:43.623 05:35:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:44.594 "name": "raid_bdev1", 00:24:44.594 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:44.594 "strip_size_kb": 0, 00:24:44.594 "state": "online", 00:24:44.594 "raid_level": "raid1", 00:24:44.594 "superblock": true, 00:24:44.594 "num_base_bdevs": 2, 00:24:44.594 "num_base_bdevs_discovered": 1, 00:24:44.594 "num_base_bdevs_operational": 1, 00:24:44.594 "base_bdevs_list": [ 00:24:44.594 { 00:24:44.594 "name": null, 00:24:44.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.594 "is_configured": false, 00:24:44.594 "data_offset": 0, 00:24:44.594 "data_size": 7936 00:24:44.594 }, 00:24:44.594 { 00:24:44.594 "name": "BaseBdev2", 00:24:44.594 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:44.594 "is_configured": true, 00:24:44.594 "data_offset": 256, 00:24:44.594 "data_size": 7936 00:24:44.594 } 00:24:44.594 ] 00:24:44.594 }' 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:44.594 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:44.854 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:44.854 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:44.854 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:44.854 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:44.854 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:44.854 05:35:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.854 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.854 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:44.854 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.854 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.854 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:44.854 "name": "raid_bdev1", 00:24:44.854 "uuid": "18f4a64b-0afc-4d62-9020-64a093fc35f7", 00:24:44.854 "strip_size_kb": 0, 00:24:44.854 "state": "online", 00:24:44.854 "raid_level": "raid1", 00:24:44.854 "superblock": true, 00:24:44.854 "num_base_bdevs": 2, 00:24:44.854 "num_base_bdevs_discovered": 1, 00:24:44.854 "num_base_bdevs_operational": 1, 00:24:44.854 "base_bdevs_list": [ 00:24:44.854 { 00:24:44.854 "name": null, 00:24:44.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.854 "is_configured": false, 00:24:44.854 "data_offset": 0, 00:24:44.854 "data_size": 7936 00:24:44.854 }, 00:24:44.854 { 00:24:44.854 "name": "BaseBdev2", 00:24:44.854 "uuid": "e7e8cf16-7e0e-5ee3-8761-1e80357825c1", 00:24:44.854 "is_configured": true, 00:24:44.854 "data_offset": 256, 00:24:44.854 "data_size": 7936 00:24:44.854 } 00:24:44.854 ] 00:24:44.854 }' 00:24:44.854 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:44.854 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:44.854 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:45.114 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:45.114 05:35:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 84077 00:24:45.114 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 84077 ']' 00:24:45.114 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 84077 00:24:45.114 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:24:45.114 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:45.114 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84077 00:24:45.114 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:45.114 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:45.114 killing process with pid 84077 00:24:45.114 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84077' 00:24:45.114 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 84077 00:24:45.114 Received shutdown signal, test time was about 60.000000 seconds 00:24:45.114 00:24:45.114 Latency(us) 00:24:45.114 [2024-11-20T05:35:16.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.114 [2024-11-20T05:35:16.949Z] =================================================================================================================== 00:24:45.114 [2024-11-20T05:35:16.949Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:45.114 [2024-11-20 05:35:16.720438] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:45.114 05:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 84077 00:24:45.114 [2024-11-20 05:35:16.720541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:45.114 [2024-11-20 
05:35:16.720580] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:45.114 [2024-11-20 05:35:16.720590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:45.114 [2024-11-20 05:35:16.866089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:45.686 05:35:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:24:45.686 00:24:45.686 real 0m17.059s 00:24:45.686 user 0m21.511s 00:24:45.686 sys 0m2.072s 00:24:45.686 05:35:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:45.686 05:35:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:45.686 ************************************ 00:24:45.686 END TEST raid_rebuild_test_sb_4k 00:24:45.686 ************************************ 00:24:45.686 05:35:17 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:24:45.686 05:35:17 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:24:45.686 05:35:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:24:45.686 05:35:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:45.686 05:35:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:45.686 ************************************ 00:24:45.686 START TEST raid_state_function_test_sb_md_separate 00:24:45.686 ************************************ 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:45.686 
05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:45.686 05:35:17 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=84741 00:24:45.686 Process raid pid: 84741 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84741' 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 84741 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 84741 ']' 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:45.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:45.686 05:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:45.946 [2024-11-20 05:35:17.543428] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:24:45.946 [2024-11-20 05:35:17.543550] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.946 [2024-11-20 05:35:17.699922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.206 [2024-11-20 05:35:17.784405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.206 [2024-11-20 05:35:17.895758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:46.206 [2024-11-20 05:35:17.895794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:46.777 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:46.777 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:24:46.777 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:46.777 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.777 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:46.778 [2024-11-20 05:35:18.394700] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:46.778 [2024-11-20 05:35:18.394750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:24:46.778 [2024-11-20 05:35:18.394758] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:46.778 [2024-11-20 05:35:18.394766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:46.778 "name": "Existed_Raid", 00:24:46.778 "uuid": "c0db3cc3-80a3-4e5b-9bcd-66c5fe1809fa", 00:24:46.778 "strip_size_kb": 0, 00:24:46.778 "state": "configuring", 00:24:46.778 "raid_level": "raid1", 00:24:46.778 "superblock": true, 00:24:46.778 "num_base_bdevs": 2, 00:24:46.778 "num_base_bdevs_discovered": 0, 00:24:46.778 "num_base_bdevs_operational": 2, 00:24:46.778 "base_bdevs_list": [ 00:24:46.778 { 00:24:46.778 "name": "BaseBdev1", 00:24:46.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.778 "is_configured": false, 00:24:46.778 "data_offset": 0, 00:24:46.778 "data_size": 0 00:24:46.778 }, 00:24:46.778 { 00:24:46.778 "name": "BaseBdev2", 00:24:46.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.778 "is_configured": false, 00:24:46.778 "data_offset": 0, 00:24:46.778 "data_size": 0 00:24:46.778 } 00:24:46.778 ] 00:24:46.778 }' 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:46.778 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.038 
[2024-11-20 05:35:18.718716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:47.038 [2024-11-20 05:35:18.718757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.038 [2024-11-20 05:35:18.726707] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:47.038 [2024-11-20 05:35:18.726743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:47.038 [2024-11-20 05:35:18.726750] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:47.038 [2024-11-20 05:35:18.726760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.038 [2024-11-20 05:35:18.755109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:47.038 
BaseBdev1 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.038 [ 00:24:47.038 { 00:24:47.038 "name": "BaseBdev1", 00:24:47.038 "aliases": [ 00:24:47.038 "8b7d118f-d563-4786-8315-fb00d2b4df7b" 00:24:47.038 ], 00:24:47.038 "product_name": "Malloc disk", 
00:24:47.038 "block_size": 4096, 00:24:47.038 "num_blocks": 8192, 00:24:47.038 "uuid": "8b7d118f-d563-4786-8315-fb00d2b4df7b", 00:24:47.038 "md_size": 32, 00:24:47.038 "md_interleave": false, 00:24:47.038 "dif_type": 0, 00:24:47.038 "assigned_rate_limits": { 00:24:47.038 "rw_ios_per_sec": 0, 00:24:47.038 "rw_mbytes_per_sec": 0, 00:24:47.038 "r_mbytes_per_sec": 0, 00:24:47.038 "w_mbytes_per_sec": 0 00:24:47.038 }, 00:24:47.038 "claimed": true, 00:24:47.038 "claim_type": "exclusive_write", 00:24:47.038 "zoned": false, 00:24:47.038 "supported_io_types": { 00:24:47.038 "read": true, 00:24:47.038 "write": true, 00:24:47.038 "unmap": true, 00:24:47.038 "flush": true, 00:24:47.038 "reset": true, 00:24:47.038 "nvme_admin": false, 00:24:47.038 "nvme_io": false, 00:24:47.038 "nvme_io_md": false, 00:24:47.038 "write_zeroes": true, 00:24:47.038 "zcopy": true, 00:24:47.038 "get_zone_info": false, 00:24:47.038 "zone_management": false, 00:24:47.038 "zone_append": false, 00:24:47.038 "compare": false, 00:24:47.038 "compare_and_write": false, 00:24:47.038 "abort": true, 00:24:47.038 "seek_hole": false, 00:24:47.038 "seek_data": false, 00:24:47.038 "copy": true, 00:24:47.038 "nvme_iov_md": false 00:24:47.038 }, 00:24:47.038 "memory_domains": [ 00:24:47.038 { 00:24:47.038 "dma_device_id": "system", 00:24:47.038 "dma_device_type": 1 00:24:47.038 }, 00:24:47.038 { 00:24:47.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.038 "dma_device_type": 2 00:24:47.038 } 00:24:47.038 ], 00:24:47.038 "driver_specific": {} 00:24:47.038 } 00:24:47.038 ] 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:47.038 05:35:18 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.038 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:47.038 "name": "Existed_Raid", 00:24:47.038 "uuid": "b22aa9eb-4d0b-4411-9165-6364068166e4", 
00:24:47.038 "strip_size_kb": 0, 00:24:47.038 "state": "configuring", 00:24:47.038 "raid_level": "raid1", 00:24:47.038 "superblock": true, 00:24:47.039 "num_base_bdevs": 2, 00:24:47.039 "num_base_bdevs_discovered": 1, 00:24:47.039 "num_base_bdevs_operational": 2, 00:24:47.039 "base_bdevs_list": [ 00:24:47.039 { 00:24:47.039 "name": "BaseBdev1", 00:24:47.039 "uuid": "8b7d118f-d563-4786-8315-fb00d2b4df7b", 00:24:47.039 "is_configured": true, 00:24:47.039 "data_offset": 256, 00:24:47.039 "data_size": 7936 00:24:47.039 }, 00:24:47.039 { 00:24:47.039 "name": "BaseBdev2", 00:24:47.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.039 "is_configured": false, 00:24:47.039 "data_offset": 0, 00:24:47.039 "data_size": 0 00:24:47.039 } 00:24:47.039 ] 00:24:47.039 }' 00:24:47.039 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:47.039 05:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.300 [2024-11-20 05:35:19.087234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:47.300 [2024-11-20 05:35:19.087284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:47.300 05:35:19 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.300 [2024-11-20 05:35:19.095258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:47.300 [2024-11-20 05:35:19.096811] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:47.300 [2024-11-20 05:35:19.096846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:47.300 "name": "Existed_Raid", 00:24:47.300 "uuid": "ca02021f-2d6b-4242-845a-13b2fdf0efcf", 00:24:47.300 "strip_size_kb": 0, 00:24:47.300 "state": "configuring", 00:24:47.300 "raid_level": "raid1", 00:24:47.300 "superblock": true, 00:24:47.300 "num_base_bdevs": 2, 00:24:47.300 "num_base_bdevs_discovered": 1, 00:24:47.300 "num_base_bdevs_operational": 2, 00:24:47.300 "base_bdevs_list": [ 00:24:47.300 { 00:24:47.300 "name": "BaseBdev1", 00:24:47.300 "uuid": "8b7d118f-d563-4786-8315-fb00d2b4df7b", 00:24:47.300 "is_configured": true, 00:24:47.300 "data_offset": 256, 00:24:47.300 "data_size": 7936 00:24:47.300 }, 00:24:47.300 { 00:24:47.300 "name": "BaseBdev2", 00:24:47.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.300 "is_configured": false, 00:24:47.300 "data_offset": 0, 00:24:47.300 "data_size": 0 00:24:47.300 } 00:24:47.300 ] 00:24:47.300 }' 00:24:47.300 05:35:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:47.300 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.869 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:24:47.869 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.869 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.869 [2024-11-20 05:35:19.433931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:47.869 [2024-11-20 05:35:19.434107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:47.869 [2024-11-20 05:35:19.434119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:47.869 [2024-11-20 05:35:19.434184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:47.869 [2024-11-20 05:35:19.434273] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:47.869 [2024-11-20 05:35:19.434314] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:47.869 [2024-11-20 05:35:19.434389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:47.869 BaseBdev2 00:24:47.869 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.869 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:47.869 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:24:47.869 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:24:47.869 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:24:47.869 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:24:47.869 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.870 [ 00:24:47.870 { 00:24:47.870 "name": "BaseBdev2", 00:24:47.870 "aliases": [ 00:24:47.870 "b1916cae-ba59-49ff-ada6-00b84d26172a" 00:24:47.870 ], 00:24:47.870 "product_name": "Malloc disk", 00:24:47.870 "block_size": 4096, 00:24:47.870 "num_blocks": 8192, 00:24:47.870 "uuid": "b1916cae-ba59-49ff-ada6-00b84d26172a", 00:24:47.870 "md_size": 32, 00:24:47.870 "md_interleave": false, 00:24:47.870 "dif_type": 0, 00:24:47.870 "assigned_rate_limits": { 00:24:47.870 "rw_ios_per_sec": 0, 00:24:47.870 "rw_mbytes_per_sec": 0, 00:24:47.870 "r_mbytes_per_sec": 0, 00:24:47.870 "w_mbytes_per_sec": 0 00:24:47.870 }, 00:24:47.870 "claimed": true, 00:24:47.870 "claim_type": 
"exclusive_write", 00:24:47.870 "zoned": false, 00:24:47.870 "supported_io_types": { 00:24:47.870 "read": true, 00:24:47.870 "write": true, 00:24:47.870 "unmap": true, 00:24:47.870 "flush": true, 00:24:47.870 "reset": true, 00:24:47.870 "nvme_admin": false, 00:24:47.870 "nvme_io": false, 00:24:47.870 "nvme_io_md": false, 00:24:47.870 "write_zeroes": true, 00:24:47.870 "zcopy": true, 00:24:47.870 "get_zone_info": false, 00:24:47.870 "zone_management": false, 00:24:47.870 "zone_append": false, 00:24:47.870 "compare": false, 00:24:47.870 "compare_and_write": false, 00:24:47.870 "abort": true, 00:24:47.870 "seek_hole": false, 00:24:47.870 "seek_data": false, 00:24:47.870 "copy": true, 00:24:47.870 "nvme_iov_md": false 00:24:47.870 }, 00:24:47.870 "memory_domains": [ 00:24:47.870 { 00:24:47.870 "dma_device_id": "system", 00:24:47.870 "dma_device_type": 1 00:24:47.870 }, 00:24:47.870 { 00:24:47.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.870 "dma_device_type": 2 00:24:47.870 } 00:24:47.870 ], 00:24:47.870 "driver_specific": {} 00:24:47.870 } 00:24:47.870 ] 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:47.870 
05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:47.870 "name": "Existed_Raid", 00:24:47.870 "uuid": "ca02021f-2d6b-4242-845a-13b2fdf0efcf", 00:24:47.870 "strip_size_kb": 0, 00:24:47.870 "state": "online", 00:24:47.870 "raid_level": "raid1", 00:24:47.870 "superblock": true, 00:24:47.870 "num_base_bdevs": 2, 00:24:47.870 "num_base_bdevs_discovered": 2, 00:24:47.870 "num_base_bdevs_operational": 2, 00:24:47.870 
"base_bdevs_list": [ 00:24:47.870 { 00:24:47.870 "name": "BaseBdev1", 00:24:47.870 "uuid": "8b7d118f-d563-4786-8315-fb00d2b4df7b", 00:24:47.870 "is_configured": true, 00:24:47.870 "data_offset": 256, 00:24:47.870 "data_size": 7936 00:24:47.870 }, 00:24:47.870 { 00:24:47.870 "name": "BaseBdev2", 00:24:47.870 "uuid": "b1916cae-ba59-49ff-ada6-00b84d26172a", 00:24:47.870 "is_configured": true, 00:24:47.870 "data_offset": 256, 00:24:47.870 "data_size": 7936 00:24:47.870 } 00:24:47.870 ] 00:24:47.870 }' 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:47.870 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.131 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:48.131 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:48.131 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:48.131 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:48.131 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:24:48.131 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:48.131 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:48.131 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:48.131 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.131 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:24:48.131 [2024-11-20 05:35:19.770297] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:48.131 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.131 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:48.131 "name": "Existed_Raid", 00:24:48.131 "aliases": [ 00:24:48.131 "ca02021f-2d6b-4242-845a-13b2fdf0efcf" 00:24:48.131 ], 00:24:48.131 "product_name": "Raid Volume", 00:24:48.131 "block_size": 4096, 00:24:48.131 "num_blocks": 7936, 00:24:48.131 "uuid": "ca02021f-2d6b-4242-845a-13b2fdf0efcf", 00:24:48.131 "md_size": 32, 00:24:48.131 "md_interleave": false, 00:24:48.131 "dif_type": 0, 00:24:48.131 "assigned_rate_limits": { 00:24:48.131 "rw_ios_per_sec": 0, 00:24:48.131 "rw_mbytes_per_sec": 0, 00:24:48.131 "r_mbytes_per_sec": 0, 00:24:48.131 "w_mbytes_per_sec": 0 00:24:48.131 }, 00:24:48.131 "claimed": false, 00:24:48.131 "zoned": false, 00:24:48.131 "supported_io_types": { 00:24:48.131 "read": true, 00:24:48.131 "write": true, 00:24:48.131 "unmap": false, 00:24:48.131 "flush": false, 00:24:48.131 "reset": true, 00:24:48.131 "nvme_admin": false, 00:24:48.131 "nvme_io": false, 00:24:48.131 "nvme_io_md": false, 00:24:48.131 "write_zeroes": true, 00:24:48.131 "zcopy": false, 00:24:48.131 "get_zone_info": false, 00:24:48.131 "zone_management": false, 00:24:48.131 "zone_append": false, 00:24:48.131 "compare": false, 00:24:48.131 "compare_and_write": false, 00:24:48.131 "abort": false, 00:24:48.131 "seek_hole": false, 00:24:48.131 "seek_data": false, 00:24:48.131 "copy": false, 00:24:48.131 "nvme_iov_md": false 00:24:48.131 }, 00:24:48.131 "memory_domains": [ 00:24:48.131 { 00:24:48.131 "dma_device_id": "system", 00:24:48.131 "dma_device_type": 1 00:24:48.131 }, 00:24:48.131 { 00:24:48.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.131 "dma_device_type": 2 00:24:48.131 }, 00:24:48.131 { 
00:24:48.131 "dma_device_id": "system", 00:24:48.131 "dma_device_type": 1 00:24:48.131 }, 00:24:48.131 { 00:24:48.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.131 "dma_device_type": 2 00:24:48.131 } 00:24:48.131 ], 00:24:48.131 "driver_specific": { 00:24:48.131 "raid": { 00:24:48.131 "uuid": "ca02021f-2d6b-4242-845a-13b2fdf0efcf", 00:24:48.131 "strip_size_kb": 0, 00:24:48.131 "state": "online", 00:24:48.131 "raid_level": "raid1", 00:24:48.131 "superblock": true, 00:24:48.131 "num_base_bdevs": 2, 00:24:48.131 "num_base_bdevs_discovered": 2, 00:24:48.131 "num_base_bdevs_operational": 2, 00:24:48.131 "base_bdevs_list": [ 00:24:48.131 { 00:24:48.131 "name": "BaseBdev1", 00:24:48.131 "uuid": "8b7d118f-d563-4786-8315-fb00d2b4df7b", 00:24:48.131 "is_configured": true, 00:24:48.131 "data_offset": 256, 00:24:48.131 "data_size": 7936 00:24:48.131 }, 00:24:48.131 { 00:24:48.131 "name": "BaseBdev2", 00:24:48.131 "uuid": "b1916cae-ba59-49ff-ada6-00b84d26172a", 00:24:48.131 "is_configured": true, 00:24:48.131 "data_offset": 256, 00:24:48.131 "data_size": 7936 00:24:48.131 } 00:24:48.131 ] 00:24:48.131 } 00:24:48.131 } 00:24:48.131 }' 00:24:48.131 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:48.131 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:48.131 BaseBdev2' 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.132 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.132 [2024-11-20 05:35:19.922103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.393 05:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.393 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:48.393 "name": "Existed_Raid", 00:24:48.393 "uuid": "ca02021f-2d6b-4242-845a-13b2fdf0efcf", 00:24:48.393 "strip_size_kb": 0, 00:24:48.393 "state": "online", 00:24:48.393 "raid_level": "raid1", 00:24:48.393 "superblock": true, 00:24:48.393 "num_base_bdevs": 2, 00:24:48.393 "num_base_bdevs_discovered": 1, 00:24:48.393 "num_base_bdevs_operational": 1, 00:24:48.393 "base_bdevs_list": [ 00:24:48.393 { 00:24:48.393 "name": null, 00:24:48.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.393 "is_configured": false, 00:24:48.393 "data_offset": 0, 00:24:48.393 "data_size": 7936 00:24:48.393 }, 00:24:48.393 { 00:24:48.393 "name": "BaseBdev2", 00:24:48.393 "uuid": 
"b1916cae-ba59-49ff-ada6-00b84d26172a", 00:24:48.393 "is_configured": true, 00:24:48.393 "data_offset": 256, 00:24:48.393 "data_size": 7936 00:24:48.393 } 00:24:48.394 ] 00:24:48.394 }' 00:24:48.394 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:48.394 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.654 [2024-11-20 05:35:20.341070] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:48.654 [2024-11-20 05:35:20.341256] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:48.654 [2024-11-20 05:35:20.391845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:48.654 [2024-11-20 05:35:20.392012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:48.654 [2024-11-20 05:35:20.392028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.654 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:48.655 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:48.655 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:48.655 05:35:20 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 84741 00:24:48.655 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 84741 ']' 00:24:48.655 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 84741 00:24:48.655 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:24:48.655 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:48.655 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84741 00:24:48.655 killing process with pid 84741 00:24:48.655 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:48.655 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:48.655 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84741' 00:24:48.655 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 84741 00:24:48.655 [2024-11-20 05:35:20.454115] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:48.655 05:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 84741 00:24:48.655 [2024-11-20 05:35:20.462407] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:49.224 05:35:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:24:49.224 00:24:49.224 real 0m3.557s 00:24:49.224 user 0m5.173s 00:24:49.224 sys 0m0.622s 00:24:49.224 ************************************ 00:24:49.224 END TEST raid_state_function_test_sb_md_separate 00:24:49.224 
************************************ 00:24:49.224 05:35:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:49.224 05:35:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:49.483 05:35:21 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:24:49.483 05:35:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:24:49.483 05:35:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:49.483 05:35:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:49.483 ************************************ 00:24:49.483 START TEST raid_superblock_test_md_separate 00:24:49.483 ************************************ 00:24:49.483 05:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:24:49.483 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:24:49.483 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:49.483 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:49.483 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:49.483 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:49.483 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:49.483 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:49.483 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:49.483 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:24:49.483 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:49.484 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:49.484 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:49.484 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:49.484 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:24:49.484 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:24:49.484 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=84977 00:24:49.484 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 84977 00:24:49.484 05:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 84977 ']' 00:24:49.484 05:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.484 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:49.484 05:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:49.484 05:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:49.484 05:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:49.484 05:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:49.484 [2024-11-20 05:35:21.141978] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:24:49.484 [2024-11-20 05:35:21.142100] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84977 ] 00:24:49.484 [2024-11-20 05:35:21.301094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.742 [2024-11-20 05:35:21.400111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.742 [2024-11-20 05:35:21.535930] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:49.742 [2024-11-20 05:35:21.535980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:50.311 05:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:50.311 05:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:24:50.311 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:50.311 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:50.311 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:50.311 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:50.311 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:50.311 05:35:21 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:50.311 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:50.311 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:50.311 05:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:24:50.311 05:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.311 05:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.311 malloc1 00:24:50.311 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.311 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:50.311 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.311 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.311 [2024-11-20 05:35:22.012810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:50.311 [2024-11-20 05:35:22.012866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.311 [2024-11-20 05:35:22.012888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:50.311 [2024-11-20 05:35:22.012898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.312 [2024-11-20 05:35:22.014826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.312 [2024-11-20 05:35:22.014966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:24:50.312 pt1 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.312 malloc2 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.312 05:35:22 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.312 [2024-11-20 05:35:22.049328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:50.312 [2024-11-20 05:35:22.049404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.312 [2024-11-20 05:35:22.049428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:50.312 [2024-11-20 05:35:22.049436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.312 [2024-11-20 05:35:22.051404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.312 [2024-11-20 05:35:22.051434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:50.312 pt2 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.312 [2024-11-20 05:35:22.057358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:50.312 [2024-11-20 05:35:22.059231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:50.312 [2024-11-20 05:35:22.059437] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:50.312 [2024-11-20 05:35:22.059456] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:50.312 [2024-11-20 05:35:22.059566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:50.312 [2024-11-20 05:35:22.059707] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:50.312 [2024-11-20 05:35:22.059725] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:50.312 [2024-11-20 05:35:22.059827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:50.312 05:35:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:50.312 "name": "raid_bdev1", 00:24:50.312 "uuid": "62d05134-0abc-4c7f-afca-af7f37018c35", 00:24:50.312 "strip_size_kb": 0, 00:24:50.312 "state": "online", 00:24:50.312 "raid_level": "raid1", 00:24:50.312 "superblock": true, 00:24:50.312 "num_base_bdevs": 2, 00:24:50.312 "num_base_bdevs_discovered": 2, 00:24:50.312 "num_base_bdevs_operational": 2, 00:24:50.312 "base_bdevs_list": [ 00:24:50.312 { 00:24:50.312 "name": "pt1", 00:24:50.312 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:50.312 "is_configured": true, 00:24:50.312 "data_offset": 256, 00:24:50.312 "data_size": 7936 00:24:50.312 }, 00:24:50.312 { 00:24:50.312 "name": "pt2", 00:24:50.312 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:50.312 "is_configured": true, 00:24:50.312 "data_offset": 256, 00:24:50.312 "data_size": 7936 00:24:50.312 } 00:24:50.312 ] 00:24:50.312 }' 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:50.312 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.573 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:50.573 05:35:22 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:50.573 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:50.573 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:50.573 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:24:50.573 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:50.573 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:50.573 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.573 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.573 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:50.573 [2024-11-20 05:35:22.381744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:50.573 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.834 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:50.834 "name": "raid_bdev1", 00:24:50.834 "aliases": [ 00:24:50.834 "62d05134-0abc-4c7f-afca-af7f37018c35" 00:24:50.834 ], 00:24:50.834 "product_name": "Raid Volume", 00:24:50.834 "block_size": 4096, 00:24:50.834 "num_blocks": 7936, 00:24:50.834 "uuid": "62d05134-0abc-4c7f-afca-af7f37018c35", 00:24:50.834 "md_size": 32, 00:24:50.834 "md_interleave": false, 00:24:50.834 "dif_type": 0, 00:24:50.834 "assigned_rate_limits": { 00:24:50.834 "rw_ios_per_sec": 0, 00:24:50.834 "rw_mbytes_per_sec": 0, 00:24:50.834 "r_mbytes_per_sec": 0, 00:24:50.834 "w_mbytes_per_sec": 0 00:24:50.834 }, 00:24:50.834 "claimed": false, 00:24:50.834 "zoned": false, 
00:24:50.834 "supported_io_types": { 00:24:50.834 "read": true, 00:24:50.834 "write": true, 00:24:50.834 "unmap": false, 00:24:50.834 "flush": false, 00:24:50.834 "reset": true, 00:24:50.834 "nvme_admin": false, 00:24:50.835 "nvme_io": false, 00:24:50.835 "nvme_io_md": false, 00:24:50.835 "write_zeroes": true, 00:24:50.835 "zcopy": false, 00:24:50.835 "get_zone_info": false, 00:24:50.835 "zone_management": false, 00:24:50.835 "zone_append": false, 00:24:50.835 "compare": false, 00:24:50.835 "compare_and_write": false, 00:24:50.835 "abort": false, 00:24:50.835 "seek_hole": false, 00:24:50.835 "seek_data": false, 00:24:50.835 "copy": false, 00:24:50.835 "nvme_iov_md": false 00:24:50.835 }, 00:24:50.835 "memory_domains": [ 00:24:50.835 { 00:24:50.835 "dma_device_id": "system", 00:24:50.835 "dma_device_type": 1 00:24:50.835 }, 00:24:50.835 { 00:24:50.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:50.835 "dma_device_type": 2 00:24:50.835 }, 00:24:50.835 { 00:24:50.835 "dma_device_id": "system", 00:24:50.835 "dma_device_type": 1 00:24:50.835 }, 00:24:50.835 { 00:24:50.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:50.835 "dma_device_type": 2 00:24:50.835 } 00:24:50.835 ], 00:24:50.835 "driver_specific": { 00:24:50.835 "raid": { 00:24:50.835 "uuid": "62d05134-0abc-4c7f-afca-af7f37018c35", 00:24:50.835 "strip_size_kb": 0, 00:24:50.835 "state": "online", 00:24:50.835 "raid_level": "raid1", 00:24:50.835 "superblock": true, 00:24:50.835 "num_base_bdevs": 2, 00:24:50.835 "num_base_bdevs_discovered": 2, 00:24:50.835 "num_base_bdevs_operational": 2, 00:24:50.835 "base_bdevs_list": [ 00:24:50.835 { 00:24:50.835 "name": "pt1", 00:24:50.835 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:50.835 "is_configured": true, 00:24:50.835 "data_offset": 256, 00:24:50.835 "data_size": 7936 00:24:50.835 }, 00:24:50.835 { 00:24:50.835 "name": "pt2", 00:24:50.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:50.835 "is_configured": true, 00:24:50.835 "data_offset": 256, 
00:24:50.835 "data_size": 7936 00:24:50.835 } 00:24:50.835 ] 00:24:50.835 } 00:24:50.835 } 00:24:50.835 }' 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:50.835 pt2' 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.835 [2024-11-20 05:35:22.557773] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=62d05134-0abc-4c7f-afca-af7f37018c35 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 62d05134-0abc-4c7f-afca-af7f37018c35 ']' 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.835 [2024-11-20 05:35:22.589464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:50.835 [2024-11-20 05:35:22.589573] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:50.835 [2024-11-20 05:35:22.589734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:50.835 [2024-11-20 05:35:22.589878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:50.835 [2024-11-20 05:35:22.590021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.835 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.097 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:51.097 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:51.097 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:24:51.097 05:35:22 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:51.097 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:51.097 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:51.097 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:51.097 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:51.097 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:51.097 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.097 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.097 [2024-11-20 05:35:22.681502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:51.097 [2024-11-20 05:35:22.683390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:51.097 [2024-11-20 05:35:22.683461] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:51.097 [2024-11-20 05:35:22.683512] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:51.097 [2024-11-20 05:35:22.683527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:51.097 [2024-11-20 05:35:22.683538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:24:51.097 request: 00:24:51.097 { 00:24:51.097 "name": 
"raid_bdev1", 00:24:51.097 "raid_level": "raid1", 00:24:51.097 "base_bdevs": [ 00:24:51.097 "malloc1", 00:24:51.097 "malloc2" 00:24:51.097 ], 00:24:51.097 "superblock": false, 00:24:51.097 "method": "bdev_raid_create", 00:24:51.097 "req_id": 1 00:24:51.097 } 00:24:51.097 Got JSON-RPC error response 00:24:51.097 response: 00:24:51.097 { 00:24:51.097 "code": -17, 00:24:51.097 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:51.097 } 00:24:51.097 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:51.097 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:24:51.097 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:51.097 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:51.097 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.098 [2024-11-20 05:35:22.725512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:51.098 [2024-11-20 05:35:22.725569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:51.098 [2024-11-20 05:35:22.725586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:51.098 [2024-11-20 05:35:22.725597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:51.098 [2024-11-20 05:35:22.727588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:51.098 [2024-11-20 05:35:22.727623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:51.098 [2024-11-20 05:35:22.727672] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:51.098 [2024-11-20 05:35:22.727727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:51.098 pt1 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:51.098 "name": "raid_bdev1", 00:24:51.098 "uuid": "62d05134-0abc-4c7f-afca-af7f37018c35", 00:24:51.098 "strip_size_kb": 0, 00:24:51.098 "state": "configuring", 00:24:51.098 "raid_level": "raid1", 00:24:51.098 "superblock": true, 00:24:51.098 "num_base_bdevs": 2, 00:24:51.098 "num_base_bdevs_discovered": 1, 00:24:51.098 "num_base_bdevs_operational": 2, 00:24:51.098 "base_bdevs_list": [ 00:24:51.098 { 00:24:51.098 "name": "pt1", 00:24:51.098 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:51.098 "is_configured": true, 00:24:51.098 "data_offset": 256, 00:24:51.098 "data_size": 7936 00:24:51.098 }, 00:24:51.098 { 00:24:51.098 "name": null, 00:24:51.098 
"uuid": "00000000-0000-0000-0000-000000000002", 00:24:51.098 "is_configured": false, 00:24:51.098 "data_offset": 256, 00:24:51.098 "data_size": 7936 00:24:51.098 } 00:24:51.098 ] 00:24:51.098 }' 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:51.098 05:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.359 [2024-11-20 05:35:23.077600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:51.359 [2024-11-20 05:35:23.077666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:51.359 [2024-11-20 05:35:23.077684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:51.359 [2024-11-20 05:35:23.077694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:51.359 [2024-11-20 05:35:23.077895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:51.359 [2024-11-20 05:35:23.077912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:51.359 [2024-11-20 05:35:23.077959] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:24:51.359 [2024-11-20 05:35:23.077979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:51.359 [2024-11-20 05:35:23.078082] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:51.359 [2024-11-20 05:35:23.078093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:51.359 [2024-11-20 05:35:23.078156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:51.359 [2024-11-20 05:35:23.078253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:51.359 [2024-11-20 05:35:23.078261] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:51.359 [2024-11-20 05:35:23.078349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:51.359 pt2 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:51.359 "name": "raid_bdev1", 00:24:51.359 "uuid": "62d05134-0abc-4c7f-afca-af7f37018c35", 00:24:51.359 "strip_size_kb": 0, 00:24:51.359 "state": "online", 00:24:51.359 "raid_level": "raid1", 00:24:51.359 "superblock": true, 00:24:51.359 "num_base_bdevs": 2, 00:24:51.359 "num_base_bdevs_discovered": 2, 00:24:51.359 "num_base_bdevs_operational": 2, 00:24:51.359 "base_bdevs_list": [ 00:24:51.359 { 00:24:51.359 "name": "pt1", 00:24:51.359 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:51.359 "is_configured": true, 00:24:51.359 "data_offset": 256, 00:24:51.359 "data_size": 7936 00:24:51.359 }, 00:24:51.359 { 00:24:51.359 "name": "pt2", 00:24:51.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:51.359 "is_configured": true, 00:24:51.359 "data_offset": 256, 
00:24:51.359 "data_size": 7936 00:24:51.359 } 00:24:51.359 ] 00:24:51.359 }' 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:51.359 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.620 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:24:51.620 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:51.620 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:51.620 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:51.620 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:24:51.620 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:51.620 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:51.620 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:51.620 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.620 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.620 [2024-11-20 05:35:23.401985] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:51.620 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.620 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:51.620 "name": "raid_bdev1", 00:24:51.620 "aliases": [ 00:24:51.620 "62d05134-0abc-4c7f-afca-af7f37018c35" 00:24:51.620 ], 00:24:51.620 "product_name": 
"Raid Volume", 00:24:51.620 "block_size": 4096, 00:24:51.620 "num_blocks": 7936, 00:24:51.620 "uuid": "62d05134-0abc-4c7f-afca-af7f37018c35", 00:24:51.620 "md_size": 32, 00:24:51.620 "md_interleave": false, 00:24:51.620 "dif_type": 0, 00:24:51.620 "assigned_rate_limits": { 00:24:51.620 "rw_ios_per_sec": 0, 00:24:51.620 "rw_mbytes_per_sec": 0, 00:24:51.620 "r_mbytes_per_sec": 0, 00:24:51.620 "w_mbytes_per_sec": 0 00:24:51.620 }, 00:24:51.620 "claimed": false, 00:24:51.620 "zoned": false, 00:24:51.620 "supported_io_types": { 00:24:51.620 "read": true, 00:24:51.620 "write": true, 00:24:51.620 "unmap": false, 00:24:51.620 "flush": false, 00:24:51.620 "reset": true, 00:24:51.620 "nvme_admin": false, 00:24:51.620 "nvme_io": false, 00:24:51.620 "nvme_io_md": false, 00:24:51.620 "write_zeroes": true, 00:24:51.620 "zcopy": false, 00:24:51.620 "get_zone_info": false, 00:24:51.620 "zone_management": false, 00:24:51.620 "zone_append": false, 00:24:51.620 "compare": false, 00:24:51.620 "compare_and_write": false, 00:24:51.620 "abort": false, 00:24:51.620 "seek_hole": false, 00:24:51.620 "seek_data": false, 00:24:51.620 "copy": false, 00:24:51.620 "nvme_iov_md": false 00:24:51.620 }, 00:24:51.620 "memory_domains": [ 00:24:51.620 { 00:24:51.620 "dma_device_id": "system", 00:24:51.620 "dma_device_type": 1 00:24:51.620 }, 00:24:51.620 { 00:24:51.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:51.620 "dma_device_type": 2 00:24:51.620 }, 00:24:51.620 { 00:24:51.620 "dma_device_id": "system", 00:24:51.620 "dma_device_type": 1 00:24:51.620 }, 00:24:51.620 { 00:24:51.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:51.620 "dma_device_type": 2 00:24:51.620 } 00:24:51.620 ], 00:24:51.620 "driver_specific": { 00:24:51.620 "raid": { 00:24:51.620 "uuid": "62d05134-0abc-4c7f-afca-af7f37018c35", 00:24:51.620 "strip_size_kb": 0, 00:24:51.620 "state": "online", 00:24:51.620 "raid_level": "raid1", 00:24:51.621 "superblock": true, 00:24:51.621 "num_base_bdevs": 2, 00:24:51.621 
"num_base_bdevs_discovered": 2, 00:24:51.621 "num_base_bdevs_operational": 2, 00:24:51.621 "base_bdevs_list": [ 00:24:51.621 { 00:24:51.621 "name": "pt1", 00:24:51.621 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:51.621 "is_configured": true, 00:24:51.621 "data_offset": 256, 00:24:51.621 "data_size": 7936 00:24:51.621 }, 00:24:51.621 { 00:24:51.621 "name": "pt2", 00:24:51.621 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:51.621 "is_configured": true, 00:24:51.621 "data_offset": 256, 00:24:51.621 "data_size": 7936 00:24:51.621 } 00:24:51.621 ] 00:24:51.621 } 00:24:51.621 } 00:24:51.621 }' 00:24:51.621 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:51.885 pt2' 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.885 
05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:51.885 [2024-11-20 05:35:23.570039] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 62d05134-0abc-4c7f-afca-af7f37018c35 '!=' 62d05134-0abc-4c7f-afca-af7f37018c35 ']' 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.885 [2024-11-20 05:35:23.597794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:51.885 05:35:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.885 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:51.885 "name": "raid_bdev1", 00:24:51.885 "uuid": "62d05134-0abc-4c7f-afca-af7f37018c35", 00:24:51.885 "strip_size_kb": 0, 00:24:51.885 "state": "online", 00:24:51.885 "raid_level": "raid1", 00:24:51.885 "superblock": true, 00:24:51.885 "num_base_bdevs": 2, 00:24:51.885 "num_base_bdevs_discovered": 1, 00:24:51.885 "num_base_bdevs_operational": 1, 00:24:51.885 "base_bdevs_list": [ 00:24:51.885 { 00:24:51.885 "name": null, 00:24:51.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.885 "is_configured": false, 00:24:51.885 "data_offset": 0, 00:24:51.885 "data_size": 7936 00:24:51.885 }, 00:24:51.885 { 00:24:51.885 "name": "pt2", 00:24:51.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:51.885 "is_configured": true, 00:24:51.885 "data_offset": 256, 00:24:51.885 "data_size": 7936 00:24:51.885 } 00:24:51.885 ] 00:24:51.886 }' 00:24:51.886 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:24:51.886 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.146 [2024-11-20 05:35:23.925814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:52.146 [2024-11-20 05:35:23.925930] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:52.146 [2024-11-20 05:35:23.926073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:52.146 [2024-11-20 05:35:23.926119] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:52.146 [2024-11-20 05:35:23.926130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:24:52.146 05:35:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.146 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.146 [2024-11-20 05:35:23.973830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:52.146 [2024-11-20 05:35:23.973886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:52.146 
[2024-11-20 05:35:23.973900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:52.146 [2024-11-20 05:35:23.973909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:52.146 [2024-11-20 05:35:23.975601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:52.146 [2024-11-20 05:35:23.975634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:52.146 [2024-11-20 05:35:23.975678] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:52.146 [2024-11-20 05:35:23.975713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:52.146 [2024-11-20 05:35:23.975785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:52.147 [2024-11-20 05:35:23.975795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:52.147 [2024-11-20 05:35:23.975853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:52.147 [2024-11-20 05:35:23.975932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:52.147 [2024-11-20 05:35:23.975939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:24:52.147 [2024-11-20 05:35:23.976012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:52.147 pt2 00:24:52.147 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.147 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:52.147 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:52.147 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:24:52.147 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:52.147 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:52.147 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:52.147 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:52.147 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:52.147 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:52.147 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:52.406 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.406 05:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.406 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.406 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.406 05:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.406 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:52.406 "name": "raid_bdev1", 00:24:52.406 "uuid": "62d05134-0abc-4c7f-afca-af7f37018c35", 00:24:52.406 "strip_size_kb": 0, 00:24:52.406 "state": "online", 00:24:52.406 "raid_level": "raid1", 00:24:52.406 "superblock": true, 00:24:52.406 "num_base_bdevs": 2, 00:24:52.406 "num_base_bdevs_discovered": 1, 00:24:52.406 "num_base_bdevs_operational": 1, 00:24:52.406 "base_bdevs_list": [ 00:24:52.406 { 00:24:52.406 
"name": null, 00:24:52.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.406 "is_configured": false, 00:24:52.406 "data_offset": 256, 00:24:52.406 "data_size": 7936 00:24:52.406 }, 00:24:52.406 { 00:24:52.406 "name": "pt2", 00:24:52.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:52.406 "is_configured": true, 00:24:52.406 "data_offset": 256, 00:24:52.406 "data_size": 7936 00:24:52.406 } 00:24:52.406 ] 00:24:52.406 }' 00:24:52.406 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:52.406 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.664 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:52.664 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.664 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.664 [2024-11-20 05:35:24.289861] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:52.664 [2024-11-20 05:35:24.289887] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:52.664 [2024-11-20 05:35:24.289943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:52.664 [2024-11-20 05:35:24.289986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:52.664 [2024-11-20 05:35:24.289994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:24:52.664 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.664 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.664 05:35:24 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.664 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.665 [2024-11-20 05:35:24.329902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:52.665 [2024-11-20 05:35:24.329959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:52.665 [2024-11-20 05:35:24.329975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:52.665 [2024-11-20 05:35:24.329983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:52.665 [2024-11-20 05:35:24.331669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:52.665 [2024-11-20 05:35:24.331698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:52.665 [2024-11-20 05:35:24.331744] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:24:52.665 [2024-11-20 05:35:24.331777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:52.665 [2024-11-20 05:35:24.331875] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:52.665 [2024-11-20 05:35:24.331883] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:52.665 [2024-11-20 05:35:24.331898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:24:52.665 [2024-11-20 05:35:24.331941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:52.665 [2024-11-20 05:35:24.331992] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:24:52.665 [2024-11-20 05:35:24.332004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:52.665 [2024-11-20 05:35:24.332065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:52.665 [2024-11-20 05:35:24.332140] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:24:52.665 [2024-11-20 05:35:24.332152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:24:52.665 [2024-11-20 05:35:24.332230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:52.665 pt1 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:52.665 "name": "raid_bdev1", 00:24:52.665 "uuid": "62d05134-0abc-4c7f-afca-af7f37018c35", 00:24:52.665 "strip_size_kb": 0, 00:24:52.665 "state": "online", 00:24:52.665 "raid_level": "raid1", 00:24:52.665 "superblock": true, 00:24:52.665 "num_base_bdevs": 2, 00:24:52.665 "num_base_bdevs_discovered": 1, 00:24:52.665 
"num_base_bdevs_operational": 1, 00:24:52.665 "base_bdevs_list": [ 00:24:52.665 { 00:24:52.665 "name": null, 00:24:52.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.665 "is_configured": false, 00:24:52.665 "data_offset": 256, 00:24:52.665 "data_size": 7936 00:24:52.665 }, 00:24:52.665 { 00:24:52.665 "name": "pt2", 00:24:52.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:52.665 "is_configured": true, 00:24:52.665 "data_offset": 256, 00:24:52.665 "data_size": 7936 00:24:52.665 } 00:24:52.665 ] 00:24:52.665 }' 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:52.665 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.925 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:52.925 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:24:52.925 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.925 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.925 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.925 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:24:52.925 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:52.925 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:24:52.925 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.925 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.925 [2024-11-20 
05:35:24.734184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:52.925 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.188 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 62d05134-0abc-4c7f-afca-af7f37018c35 '!=' 62d05134-0abc-4c7f-afca-af7f37018c35 ']' 00:24:53.188 05:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 84977 00:24:53.188 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 84977 ']' 00:24:53.188 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 84977 00:24:53.188 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:24:53.188 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:53.188 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84977 00:24:53.188 killing process with pid 84977 00:24:53.188 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:53.188 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:53.188 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84977' 00:24:53.188 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # kill 84977 00:24:53.188 [2024-11-20 05:35:24.780830] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:53.188 [2024-11-20 05:35:24.780898] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:53.188 05:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 84977 
00:24:53.188 [2024-11-20 05:35:24.780937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:53.188 [2024-11-20 05:35:24.780951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:24:53.188 [2024-11-20 05:35:24.888997] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:53.762 05:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:24:53.762 00:24:53.762 real 0m4.383s 00:24:53.762 user 0m6.743s 00:24:53.762 sys 0m0.733s 00:24:53.762 05:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:53.762 05:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:53.762 ************************************ 00:24:53.762 END TEST raid_superblock_test_md_separate 00:24:53.762 ************************************ 00:24:53.762 05:35:25 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:24:53.762 05:35:25 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:24:53.762 05:35:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:24:53.762 05:35:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:53.762 05:35:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:53.762 ************************************ 00:24:53.762 START TEST raid_rebuild_test_sb_md_separate 00:24:53.762 ************************************ 00:24:53.762 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:53.763 
05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:53.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=85283 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 85283 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 85283 ']' 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:53.763 05:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:53.763 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:24:53.763 Zero copy mechanism will not be used. 00:24:53.763 [2024-11-20 05:35:25.572213] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:24:53.763 [2024-11-20 05:35:25.572332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85283 ] 00:24:54.022 [2024-11-20 05:35:25.727345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.022 [2024-11-20 05:35:25.811673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.282 [2024-11-20 05:35:25.921391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:54.282 [2024-11-20 05:35:25.921557] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:54.854 BaseBdev1_malloc 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:54.854 05:35:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:54.854 [2024-11-20 05:35:26.447641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:54.854 [2024-11-20 05:35:26.447689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.854 [2024-11-20 05:35:26.447707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:54.854 [2024-11-20 05:35:26.447716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.854 [2024-11-20 05:35:26.449292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.854 [2024-11-20 05:35:26.449436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:54.854 BaseBdev1 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:54.854 BaseBdev2_malloc 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:54.854 [2024-11-20 05:35:26.479732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:54.854 [2024-11-20 05:35:26.479782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.854 [2024-11-20 05:35:26.479797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:54.854 [2024-11-20 05:35:26.479806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.854 [2024-11-20 05:35:26.481397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.854 [2024-11-20 05:35:26.481425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:54.854 BaseBdev2 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:54.854 spare_malloc 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:54.854 spare_delay 00:24:54.854 05:35:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:54.854 [2024-11-20 05:35:26.531839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:54.854 [2024-11-20 05:35:26.531885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.854 [2024-11-20 05:35:26.531902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:54.854 [2024-11-20 05:35:26.531911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.854 [2024-11-20 05:35:26.533526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.854 [2024-11-20 05:35:26.533656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:54.854 spare 00:24:54.854 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:54.855 [2024-11-20 05:35:26.539878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:54.855 [2024-11-20 05:35:26.541429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:24:54.855 [2024-11-20 05:35:26.541568] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:54.855 [2024-11-20 05:35:26.541580] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:54.855 [2024-11-20 05:35:26.541638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:54.855 [2024-11-20 05:35:26.541733] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:54.855 [2024-11-20 05:35:26.541741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:54.855 [2024-11-20 05:35:26.541816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:54.855 "name": "raid_bdev1", 00:24:54.855 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:24:54.855 "strip_size_kb": 0, 00:24:54.855 "state": "online", 00:24:54.855 "raid_level": "raid1", 00:24:54.855 "superblock": true, 00:24:54.855 "num_base_bdevs": 2, 00:24:54.855 "num_base_bdevs_discovered": 2, 00:24:54.855 "num_base_bdevs_operational": 2, 00:24:54.855 "base_bdevs_list": [ 00:24:54.855 { 00:24:54.855 "name": "BaseBdev1", 00:24:54.855 "uuid": "084666a0-0345-544d-97cb-c2b5a9dfd001", 00:24:54.855 "is_configured": true, 00:24:54.855 "data_offset": 256, 00:24:54.855 "data_size": 7936 00:24:54.855 }, 00:24:54.855 { 00:24:54.855 "name": "BaseBdev2", 00:24:54.855 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:24:54.855 "is_configured": true, 00:24:54.855 "data_offset": 256, 00:24:54.855 "data_size": 7936 00:24:54.855 } 00:24:54.855 ] 00:24:54.855 }' 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:54.855 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.117 05:35:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.117 [2024-11-20 05:35:26.868176] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:55.117 05:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:55.378 [2024-11-20 05:35:27.116047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:55.378 /dev/nbd0 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:55.378 
05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:55.378 1+0 records in 00:24:55.378 1+0 records out 00:24:55.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490588 s, 8.3 MB/s 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:24:55.378 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:24:56.311 7936+0 records in 00:24:56.311 7936+0 records out 00:24:56.311 32505856 bytes (33 MB, 31 MiB) copied, 0.680228 s, 47.8 MB/s 00:24:56.311 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:56.311 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:56.311 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:56.311 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:56.311 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:24:56.311 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:56.311 05:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:56.311 [2024-11-20 05:35:28.067821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:56.311 [2024-11-20 05:35:28.075898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.311 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:56.311 "name": "raid_bdev1", 00:24:56.311 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:24:56.311 "strip_size_kb": 0, 00:24:56.311 "state": "online", 00:24:56.311 "raid_level": "raid1", 00:24:56.311 "superblock": true, 00:24:56.311 "num_base_bdevs": 2, 00:24:56.311 "num_base_bdevs_discovered": 1, 00:24:56.311 "num_base_bdevs_operational": 1, 00:24:56.311 "base_bdevs_list": [ 00:24:56.311 { 00:24:56.311 "name": null, 00:24:56.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.311 "is_configured": false, 00:24:56.311 "data_offset": 0, 00:24:56.311 "data_size": 7936 00:24:56.311 }, 00:24:56.311 { 00:24:56.311 "name": "BaseBdev2", 00:24:56.311 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:24:56.311 "is_configured": true, 00:24:56.311 "data_offset": 256, 00:24:56.311 "data_size": 7936 00:24:56.311 } 00:24:56.311 ] 00:24:56.311 }' 00:24:56.312 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:56.312 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:56.878 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:56.878 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:56.878 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:56.878 [2024-11-20 05:35:28.407968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:56.878 [2024-11-20 05:35:28.416081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:24:56.878 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.878 05:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:56.878 [2024-11-20 05:35:28.417695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:57.883 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:57.883 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:57.883 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:57.883 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:57.883 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:57.883 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:57.883 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.883 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.883 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.883 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.883 05:35:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:57.883 "name": "raid_bdev1", 00:24:57.883 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:24:57.883 "strip_size_kb": 0, 00:24:57.883 "state": "online", 00:24:57.883 "raid_level": "raid1", 00:24:57.883 "superblock": true, 00:24:57.883 "num_base_bdevs": 2, 00:24:57.883 "num_base_bdevs_discovered": 2, 00:24:57.883 "num_base_bdevs_operational": 2, 00:24:57.883 "process": { 00:24:57.884 "type": "rebuild", 00:24:57.884 "target": "spare", 00:24:57.884 "progress": { 00:24:57.884 "blocks": 2560, 00:24:57.884 "percent": 32 00:24:57.884 } 00:24:57.884 }, 00:24:57.884 "base_bdevs_list": [ 00:24:57.884 { 00:24:57.884 "name": "spare", 00:24:57.884 "uuid": "7ca532d9-ceb1-5ce3-a8a8-b69204d21618", 00:24:57.884 "is_configured": true, 00:24:57.884 "data_offset": 256, 00:24:57.884 "data_size": 7936 00:24:57.884 }, 00:24:57.884 { 00:24:57.884 "name": "BaseBdev2", 00:24:57.884 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:24:57.884 "is_configured": true, 00:24:57.884 "data_offset": 256, 00:24:57.884 "data_size": 7936 00:24:57.884 } 00:24:57.884 ] 00:24:57.884 }' 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:24:57.884 [2024-11-20 05:35:29.527951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:57.884 [2024-11-20 05:35:29.623267] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:57.884 [2024-11-20 05:35:29.623339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:57.884 [2024-11-20 05:35:29.623352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:57.884 [2024-11-20 05:35:29.623378] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:57.884 "name": "raid_bdev1", 00:24:57.884 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:24:57.884 "strip_size_kb": 0, 00:24:57.884 "state": "online", 00:24:57.884 "raid_level": "raid1", 00:24:57.884 "superblock": true, 00:24:57.884 "num_base_bdevs": 2, 00:24:57.884 "num_base_bdevs_discovered": 1, 00:24:57.884 "num_base_bdevs_operational": 1, 00:24:57.884 "base_bdevs_list": [ 00:24:57.884 { 00:24:57.884 "name": null, 00:24:57.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.884 "is_configured": false, 00:24:57.884 "data_offset": 0, 00:24:57.884 "data_size": 7936 00:24:57.884 }, 00:24:57.884 { 00:24:57.884 "name": "BaseBdev2", 00:24:57.884 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:24:57.884 "is_configured": true, 00:24:57.884 "data_offset": 256, 00:24:57.884 "data_size": 7936 00:24:57.884 } 00:24:57.884 ] 00:24:57.884 }' 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:57.884 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.145 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:58.145 05:35:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:58.145 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:58.145 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:58.145 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:58.145 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.145 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.145 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.145 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.145 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.405 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:58.405 "name": "raid_bdev1", 00:24:58.405 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:24:58.405 "strip_size_kb": 0, 00:24:58.405 "state": "online", 00:24:58.405 "raid_level": "raid1", 00:24:58.405 "superblock": true, 00:24:58.405 "num_base_bdevs": 2, 00:24:58.405 "num_base_bdevs_discovered": 1, 00:24:58.405 "num_base_bdevs_operational": 1, 00:24:58.405 "base_bdevs_list": [ 00:24:58.405 { 00:24:58.405 "name": null, 00:24:58.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.405 "is_configured": false, 00:24:58.405 "data_offset": 0, 00:24:58.405 "data_size": 7936 00:24:58.405 }, 00:24:58.405 { 00:24:58.405 "name": "BaseBdev2", 00:24:58.405 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:24:58.405 "is_configured": true, 00:24:58.405 "data_offset": 256, 00:24:58.405 "data_size": 7936 
00:24:58.405 } 00:24:58.405 ] 00:24:58.405 }' 00:24:58.405 05:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:58.405 05:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:58.405 05:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:58.405 05:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:58.405 05:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:58.405 05:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.405 05:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.405 [2024-11-20 05:35:30.055553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:58.405 [2024-11-20 05:35:30.063132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:24:58.405 05:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.405 05:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:58.405 [2024-11-20 05:35:30.064815] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:59.350 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:59.350 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:59.350 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:59.350 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:24:59.350 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:59.350 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.350 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.350 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.350 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.350 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.350 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:59.350 "name": "raid_bdev1", 00:24:59.350 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:24:59.350 "strip_size_kb": 0, 00:24:59.350 "state": "online", 00:24:59.350 "raid_level": "raid1", 00:24:59.350 "superblock": true, 00:24:59.350 "num_base_bdevs": 2, 00:24:59.350 "num_base_bdevs_discovered": 2, 00:24:59.350 "num_base_bdevs_operational": 2, 00:24:59.350 "process": { 00:24:59.350 "type": "rebuild", 00:24:59.350 "target": "spare", 00:24:59.350 "progress": { 00:24:59.350 "blocks": 2560, 00:24:59.350 "percent": 32 00:24:59.350 } 00:24:59.350 }, 00:24:59.350 "base_bdevs_list": [ 00:24:59.350 { 00:24:59.350 "name": "spare", 00:24:59.350 "uuid": "7ca532d9-ceb1-5ce3-a8a8-b69204d21618", 00:24:59.350 "is_configured": true, 00:24:59.350 "data_offset": 256, 00:24:59.350 "data_size": 7936 00:24:59.350 }, 00:24:59.350 { 00:24:59.351 "name": "BaseBdev2", 00:24:59.351 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:24:59.351 "is_configured": true, 00:24:59.351 "data_offset": 256, 00:24:59.351 "data_size": 7936 00:24:59.351 } 00:24:59.351 ] 00:24:59.351 }' 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:59.351 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=566 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:59.351 
05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.351 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.613 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.613 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:59.613 "name": "raid_bdev1", 00:24:59.613 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:24:59.613 "strip_size_kb": 0, 00:24:59.613 "state": "online", 00:24:59.613 "raid_level": "raid1", 00:24:59.613 "superblock": true, 00:24:59.613 "num_base_bdevs": 2, 00:24:59.613 "num_base_bdevs_discovered": 2, 00:24:59.613 "num_base_bdevs_operational": 2, 00:24:59.613 "process": { 00:24:59.613 "type": "rebuild", 00:24:59.613 "target": "spare", 00:24:59.613 "progress": { 00:24:59.613 "blocks": 2816, 00:24:59.613 "percent": 35 00:24:59.613 } 00:24:59.613 }, 00:24:59.613 "base_bdevs_list": [ 00:24:59.613 { 00:24:59.613 "name": "spare", 00:24:59.613 "uuid": "7ca532d9-ceb1-5ce3-a8a8-b69204d21618", 00:24:59.613 "is_configured": true, 00:24:59.613 "data_offset": 256, 00:24:59.613 "data_size": 7936 00:24:59.613 }, 00:24:59.613 { 00:24:59.613 "name": "BaseBdev2", 00:24:59.613 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:24:59.613 "is_configured": true, 00:24:59.613 "data_offset": 256, 00:24:59.613 "data_size": 7936 00:24:59.613 } 00:24:59.613 ] 00:24:59.613 }' 00:24:59.613 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:59.613 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:59.613 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:59.613 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:59.613 05:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:00.556 "name": "raid_bdev1", 00:25:00.556 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:25:00.556 "strip_size_kb": 0, 00:25:00.556 
"state": "online", 00:25:00.556 "raid_level": "raid1", 00:25:00.556 "superblock": true, 00:25:00.556 "num_base_bdevs": 2, 00:25:00.556 "num_base_bdevs_discovered": 2, 00:25:00.556 "num_base_bdevs_operational": 2, 00:25:00.556 "process": { 00:25:00.556 "type": "rebuild", 00:25:00.556 "target": "spare", 00:25:00.556 "progress": { 00:25:00.556 "blocks": 5632, 00:25:00.556 "percent": 70 00:25:00.556 } 00:25:00.556 }, 00:25:00.556 "base_bdevs_list": [ 00:25:00.556 { 00:25:00.556 "name": "spare", 00:25:00.556 "uuid": "7ca532d9-ceb1-5ce3-a8a8-b69204d21618", 00:25:00.556 "is_configured": true, 00:25:00.556 "data_offset": 256, 00:25:00.556 "data_size": 7936 00:25:00.556 }, 00:25:00.556 { 00:25:00.556 "name": "BaseBdev2", 00:25:00.556 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:25:00.556 "is_configured": true, 00:25:00.556 "data_offset": 256, 00:25:00.556 "data_size": 7936 00:25:00.556 } 00:25:00.556 ] 00:25:00.556 }' 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:00.556 05:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:01.561 [2024-11-20 05:35:33.178663] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:01.561 [2024-11-20 05:35:33.178733] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:01.561 [2024-11-20 05:35:33.178828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:01.561 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:01.561 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:01.561 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:01.561 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:01.561 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:01.561 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:01.561 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.561 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.561 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.561 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:01.561 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:01.822 "name": "raid_bdev1", 00:25:01.822 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:25:01.822 "strip_size_kb": 0, 00:25:01.822 "state": "online", 00:25:01.822 "raid_level": "raid1", 00:25:01.822 "superblock": true, 00:25:01.822 "num_base_bdevs": 2, 00:25:01.822 "num_base_bdevs_discovered": 2, 00:25:01.822 "num_base_bdevs_operational": 2, 00:25:01.822 "base_bdevs_list": [ 00:25:01.822 { 00:25:01.822 "name": "spare", 00:25:01.822 "uuid": "7ca532d9-ceb1-5ce3-a8a8-b69204d21618", 00:25:01.822 "is_configured": true, 00:25:01.822 "data_offset": 256, 00:25:01.822 "data_size": 7936 
00:25:01.822 }, 00:25:01.822 { 00:25:01.822 "name": "BaseBdev2", 00:25:01.822 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:25:01.822 "is_configured": true, 00:25:01.822 "data_offset": 256, 00:25:01.822 "data_size": 7936 00:25:01.822 } 00:25:01.822 ] 00:25:01.822 }' 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:01.822 
05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:01.822 "name": "raid_bdev1", 00:25:01.822 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:25:01.822 "strip_size_kb": 0, 00:25:01.822 "state": "online", 00:25:01.822 "raid_level": "raid1", 00:25:01.822 "superblock": true, 00:25:01.822 "num_base_bdevs": 2, 00:25:01.822 "num_base_bdevs_discovered": 2, 00:25:01.822 "num_base_bdevs_operational": 2, 00:25:01.822 "base_bdevs_list": [ 00:25:01.822 { 00:25:01.822 "name": "spare", 00:25:01.822 "uuid": "7ca532d9-ceb1-5ce3-a8a8-b69204d21618", 00:25:01.822 "is_configured": true, 00:25:01.822 "data_offset": 256, 00:25:01.822 "data_size": 7936 00:25:01.822 }, 00:25:01.822 { 00:25:01.822 "name": "BaseBdev2", 00:25:01.822 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:25:01.822 "is_configured": true, 00:25:01.822 "data_offset": 256, 00:25:01.822 "data_size": 7936 00:25:01.822 } 00:25:01.822 ] 00:25:01.822 }' 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:01.822 05:35:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.822 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:01.822 "name": "raid_bdev1", 00:25:01.823 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:25:01.823 "strip_size_kb": 0, 00:25:01.823 "state": "online", 00:25:01.823 "raid_level": "raid1", 00:25:01.823 "superblock": true, 00:25:01.823 "num_base_bdevs": 2, 00:25:01.823 "num_base_bdevs_discovered": 2, 00:25:01.823 "num_base_bdevs_operational": 2, 00:25:01.823 "base_bdevs_list": [ 00:25:01.823 { 00:25:01.823 "name": "spare", 00:25:01.823 "uuid": 
"7ca532d9-ceb1-5ce3-a8a8-b69204d21618", 00:25:01.823 "is_configured": true, 00:25:01.823 "data_offset": 256, 00:25:01.823 "data_size": 7936 00:25:01.823 }, 00:25:01.823 { 00:25:01.823 "name": "BaseBdev2", 00:25:01.823 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:25:01.823 "is_configured": true, 00:25:01.823 "data_offset": 256, 00:25:01.823 "data_size": 7936 00:25:01.823 } 00:25:01.823 ] 00:25:01.823 }' 00:25:01.823 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:01.823 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.083 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:02.083 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.083 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.083 [2024-11-20 05:35:33.895159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:02.083 [2024-11-20 05:35:33.895183] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:02.083 [2024-11-20 05:35:33.895242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:02.083 [2024-11-20 05:35:33.895300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:02.083 [2024-11-20 05:35:33.895308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:02.083 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.083 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:25:02.083 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:25:02.083 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.083 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.083 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.342 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:02.342 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:02.342 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:25:02.342 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:02.342 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:02.342 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:02.342 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:02.342 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:02.342 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:02.342 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:25:02.342 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:02.342 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:02.342 05:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 
00:25:02.342 /dev/nbd0 00:25:02.342 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:02.342 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:02.342 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:25:02.342 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:25:02.342 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:02.342 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:02.342 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:25:02.342 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:25:02.342 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:02.342 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:02.342 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:02.342 1+0 records in 00:25:02.342 1+0 records out 00:25:02.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000166598 s, 24.6 MB/s 00:25:02.342 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:02.342 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:25:02.342 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:02.602 05:35:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:25:02.602 /dev/nbd1 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:25:02.602 1+0 records in 00:25:02.602 1+0 records out 00:25:02.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238294 s, 17.2 MB/s 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:02.602 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:02.862 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:25:02.862 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:02.862 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:02.862 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:02.862 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:25:02.862 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:02.862 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:03.123 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:03.123 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:03.123 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:03.123 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:03.123 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:03.123 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:03.123 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:03.123 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:03.123 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:03.123 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:03.383 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:03.383 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:03.383 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:03.383 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:03.383 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:03.383 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:03.384 
05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:03.384 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:03.384 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:03.384 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:03.384 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.384 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.384 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.384 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:03.384 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.384 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.384 [2024-11-20 05:35:34.977755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:03.384 [2024-11-20 05:35:34.977814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:03.384 [2024-11-20 05:35:34.977832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:03.384 [2024-11-20 05:35:34.977840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:03.384 [2024-11-20 05:35:34.979531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:03.384 [2024-11-20 05:35:34.979560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:03.384 [2024-11-20 05:35:34.979609] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:25:03.384 [2024-11-20 05:35:34.979648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:03.384 [2024-11-20 05:35:34.979745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:03.384 spare 00:25:03.384 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.384 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:25:03.384 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.384 05:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.384 [2024-11-20 05:35:35.079811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:03.384 [2024-11-20 05:35:35.079855] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:03.384 [2024-11-20 05:35:35.079950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:25:03.384 [2024-11-20 05:35:35.080080] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:03.384 [2024-11-20 05:35:35.080088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:03.384 [2024-11-20 05:35:35.080192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.384 "name": "raid_bdev1", 00:25:03.384 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:25:03.384 "strip_size_kb": 0, 00:25:03.384 "state": "online", 00:25:03.384 "raid_level": "raid1", 00:25:03.384 "superblock": true, 00:25:03.384 "num_base_bdevs": 2, 00:25:03.384 "num_base_bdevs_discovered": 2, 00:25:03.384 "num_base_bdevs_operational": 2, 00:25:03.384 "base_bdevs_list": [ 
00:25:03.384 { 00:25:03.384 "name": "spare", 00:25:03.384 "uuid": "7ca532d9-ceb1-5ce3-a8a8-b69204d21618", 00:25:03.384 "is_configured": true, 00:25:03.384 "data_offset": 256, 00:25:03.384 "data_size": 7936 00:25:03.384 }, 00:25:03.384 { 00:25:03.384 "name": "BaseBdev2", 00:25:03.384 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:25:03.384 "is_configured": true, 00:25:03.384 "data_offset": 256, 00:25:03.384 "data_size": 7936 00:25:03.384 } 00:25:03.384 ] 00:25:03.384 }' 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.384 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.645 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:03.645 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:03.645 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:03.645 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:03.645 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:03.645 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.645 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.645 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.645 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.645 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.645 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:03.645 "name": "raid_bdev1", 00:25:03.645 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:25:03.645 "strip_size_kb": 0, 00:25:03.645 "state": "online", 00:25:03.645 "raid_level": "raid1", 00:25:03.645 "superblock": true, 00:25:03.645 "num_base_bdevs": 2, 00:25:03.645 "num_base_bdevs_discovered": 2, 00:25:03.645 "num_base_bdevs_operational": 2, 00:25:03.645 "base_bdevs_list": [ 00:25:03.645 { 00:25:03.645 "name": "spare", 00:25:03.645 "uuid": "7ca532d9-ceb1-5ce3-a8a8-b69204d21618", 00:25:03.645 "is_configured": true, 00:25:03.645 "data_offset": 256, 00:25:03.645 "data_size": 7936 00:25:03.645 }, 00:25:03.645 { 00:25:03.645 "name": "BaseBdev2", 00:25:03.645 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:25:03.645 "is_configured": true, 00:25:03.645 "data_offset": 256, 00:25:03.645 "data_size": 7936 00:25:03.645 } 00:25:03.645 ] 00:25:03.645 }' 00:25:03.645 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:03.645 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:03.645 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.907 [2024-11-20 05:35:35.549900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.907 "name": "raid_bdev1", 00:25:03.907 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:25:03.907 "strip_size_kb": 0, 00:25:03.907 "state": "online", 00:25:03.907 "raid_level": "raid1", 00:25:03.907 "superblock": true, 00:25:03.907 "num_base_bdevs": 2, 00:25:03.907 "num_base_bdevs_discovered": 1, 00:25:03.907 "num_base_bdevs_operational": 1, 00:25:03.907 "base_bdevs_list": [ 00:25:03.907 { 00:25:03.907 "name": null, 00:25:03.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.907 "is_configured": false, 00:25:03.907 "data_offset": 0, 00:25:03.907 "data_size": 7936 00:25:03.907 }, 00:25:03.907 { 00:25:03.907 "name": "BaseBdev2", 00:25:03.907 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:25:03.907 "is_configured": true, 00:25:03.907 "data_offset": 256, 00:25:03.907 "data_size": 7936 00:25:03.907 } 00:25:03.907 ] 00:25:03.907 }' 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.907 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.248 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:04.248 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:04.248 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.248 [2024-11-20 05:35:35.881975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:04.248 [2024-11-20 05:35:35.882117] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:04.248 [2024-11-20 05:35:35.882130] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:04.249 [2024-11-20 05:35:35.882159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:04.249 [2024-11-20 05:35:35.889411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:25:04.249 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.249 05:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:04.249 [2024-11-20 05:35:35.890998] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:05.241 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:05.241 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:05.241 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:05.241 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:05.241 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:05.241 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.241 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.241 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.242 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.242 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.242 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:05.242 "name": "raid_bdev1", 00:25:05.242 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:25:05.242 "strip_size_kb": 0, 00:25:05.242 "state": "online", 00:25:05.242 "raid_level": "raid1", 00:25:05.242 "superblock": true, 00:25:05.242 "num_base_bdevs": 2, 00:25:05.242 "num_base_bdevs_discovered": 2, 00:25:05.242 "num_base_bdevs_operational": 2, 00:25:05.242 "process": { 00:25:05.242 "type": "rebuild", 00:25:05.242 "target": "spare", 00:25:05.242 "progress": { 00:25:05.242 "blocks": 2560, 00:25:05.242 "percent": 32 00:25:05.242 } 00:25:05.242 }, 00:25:05.242 "base_bdevs_list": [ 00:25:05.242 { 00:25:05.242 "name": "spare", 00:25:05.242 "uuid": "7ca532d9-ceb1-5ce3-a8a8-b69204d21618", 00:25:05.242 "is_configured": true, 00:25:05.242 "data_offset": 256, 00:25:05.242 "data_size": 7936 00:25:05.242 }, 00:25:05.242 { 00:25:05.242 "name": "BaseBdev2", 00:25:05.242 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:25:05.242 "is_configured": true, 00:25:05.242 "data_offset": 256, 00:25:05.242 "data_size": 7936 00:25:05.242 } 00:25:05.242 ] 00:25:05.242 }' 00:25:05.242 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:05.242 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:05.242 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:05.242 
05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:05.242 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:05.242 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.242 05:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.242 [2024-11-20 05:35:36.993701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:05.242 [2024-11-20 05:35:36.995981] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:05.242 [2024-11-20 05:35:36.996024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:05.242 [2024-11-20 05:35:36.996036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:05.242 [2024-11-20 05:35:36.996043] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:05.242 05:35:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:05.242 "name": "raid_bdev1", 00:25:05.242 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:25:05.242 "strip_size_kb": 0, 00:25:05.242 "state": "online", 00:25:05.242 "raid_level": "raid1", 00:25:05.242 "superblock": true, 00:25:05.242 "num_base_bdevs": 2, 00:25:05.242 "num_base_bdevs_discovered": 1, 00:25:05.242 "num_base_bdevs_operational": 1, 00:25:05.242 "base_bdevs_list": [ 00:25:05.242 { 00:25:05.242 "name": null, 00:25:05.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.242 "is_configured": false, 00:25:05.242 "data_offset": 0, 00:25:05.242 "data_size": 7936 00:25:05.242 }, 00:25:05.242 { 00:25:05.242 "name": "BaseBdev2", 00:25:05.242 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:25:05.242 "is_configured": true, 00:25:05.242 "data_offset": 256, 00:25:05.242 "data_size": 7936 00:25:05.242 } 
00:25:05.242 ] 00:25:05.242 }' 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:05.242 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.503 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:05.503 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.503 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.503 [2024-11-20 05:35:37.336128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:05.763 [2024-11-20 05:35:37.336300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.763 [2024-11-20 05:35:37.336326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:05.763 [2024-11-20 05:35:37.336337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.763 [2024-11-20 05:35:37.336540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.763 [2024-11-20 05:35:37.336553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:05.763 [2024-11-20 05:35:37.336600] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:05.763 [2024-11-20 05:35:37.336611] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:05.763 [2024-11-20 05:35:37.336619] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:05.763 [2024-11-20 05:35:37.336640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:05.763 [2024-11-20 05:35:37.343684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:25:05.763 spare 00:25:05.763 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.763 05:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:05.763 [2024-11-20 05:35:37.345217] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:06.705 "name": 
"raid_bdev1", 00:25:06.705 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:25:06.705 "strip_size_kb": 0, 00:25:06.705 "state": "online", 00:25:06.705 "raid_level": "raid1", 00:25:06.705 "superblock": true, 00:25:06.705 "num_base_bdevs": 2, 00:25:06.705 "num_base_bdevs_discovered": 2, 00:25:06.705 "num_base_bdevs_operational": 2, 00:25:06.705 "process": { 00:25:06.705 "type": "rebuild", 00:25:06.705 "target": "spare", 00:25:06.705 "progress": { 00:25:06.705 "blocks": 2560, 00:25:06.705 "percent": 32 00:25:06.705 } 00:25:06.705 }, 00:25:06.705 "base_bdevs_list": [ 00:25:06.705 { 00:25:06.705 "name": "spare", 00:25:06.705 "uuid": "7ca532d9-ceb1-5ce3-a8a8-b69204d21618", 00:25:06.705 "is_configured": true, 00:25:06.705 "data_offset": 256, 00:25:06.705 "data_size": 7936 00:25:06.705 }, 00:25:06.705 { 00:25:06.705 "name": "BaseBdev2", 00:25:06.705 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:25:06.705 "is_configured": true, 00:25:06.705 "data_offset": 256, 00:25:06.705 "data_size": 7936 00:25:06.705 } 00:25:06.705 ] 00:25:06.705 }' 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:06.705 [2024-11-20 05:35:38.447904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:25:06.705 [2024-11-20 05:35:38.450273] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:06.705 [2024-11-20 05:35:38.450316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:06.705 [2024-11-20 05:35:38.450329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:06.705 [2024-11-20 05:35:38.450335] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:06.705 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:06.706 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:25:06.706 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.706 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.706 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:06.706 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.706 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:06.706 "name": "raid_bdev1", 00:25:06.706 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:25:06.706 "strip_size_kb": 0, 00:25:06.706 "state": "online", 00:25:06.706 "raid_level": "raid1", 00:25:06.706 "superblock": true, 00:25:06.706 "num_base_bdevs": 2, 00:25:06.706 "num_base_bdevs_discovered": 1, 00:25:06.706 "num_base_bdevs_operational": 1, 00:25:06.706 "base_bdevs_list": [ 00:25:06.706 { 00:25:06.706 "name": null, 00:25:06.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.706 "is_configured": false, 00:25:06.706 "data_offset": 0, 00:25:06.706 "data_size": 7936 00:25:06.706 }, 00:25:06.706 { 00:25:06.706 "name": "BaseBdev2", 00:25:06.706 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:25:06.706 "is_configured": true, 00:25:06.706 "data_offset": 256, 00:25:06.706 "data_size": 7936 00:25:06.706 } 00:25:06.706 ] 00:25:06.706 }' 00:25:06.706 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:06.706 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:06.966 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:06.966 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:06.966 05:35:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:06.966 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:06.966 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:06.966 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.966 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.966 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.966 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.228 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.228 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:07.228 "name": "raid_bdev1", 00:25:07.228 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:25:07.228 "strip_size_kb": 0, 00:25:07.228 "state": "online", 00:25:07.228 "raid_level": "raid1", 00:25:07.228 "superblock": true, 00:25:07.228 "num_base_bdevs": 2, 00:25:07.228 "num_base_bdevs_discovered": 1, 00:25:07.228 "num_base_bdevs_operational": 1, 00:25:07.228 "base_bdevs_list": [ 00:25:07.228 { 00:25:07.228 "name": null, 00:25:07.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.228 "is_configured": false, 00:25:07.228 "data_offset": 0, 00:25:07.228 "data_size": 7936 00:25:07.228 }, 00:25:07.228 { 00:25:07.228 "name": "BaseBdev2", 00:25:07.228 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:25:07.228 "is_configured": true, 00:25:07.228 "data_offset": 256, 00:25:07.228 "data_size": 7936 00:25:07.228 } 00:25:07.228 ] 00:25:07.228 }' 00:25:07.228 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:07.228 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:07.228 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:07.228 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:07.228 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:07.228 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.228 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.228 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.228 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:07.228 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.228 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.228 [2024-11-20 05:35:38.894421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:07.228 [2024-11-20 05:35:38.894464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:07.228 [2024-11-20 05:35:38.894481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:07.228 [2024-11-20 05:35:38.894488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:07.228 [2024-11-20 05:35:38.894651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:07.228 [2024-11-20 05:35:38.894660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:25:07.228 [2024-11-20 05:35:38.894697] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:07.228 [2024-11-20 05:35:38.894707] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:07.228 [2024-11-20 05:35:38.894714] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:07.228 [2024-11-20 05:35:38.894721] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:07.228 BaseBdev1 00:25:07.228 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.228 05:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:08.169 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:08.169 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:08.169 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:08.169 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:08.169 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:08.170 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:08.170 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:08.170 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:08.170 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:25:08.170 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:08.170 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.170 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.170 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.170 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.170 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.170 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:08.170 "name": "raid_bdev1", 00:25:08.170 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:25:08.170 "strip_size_kb": 0, 00:25:08.170 "state": "online", 00:25:08.170 "raid_level": "raid1", 00:25:08.170 "superblock": true, 00:25:08.170 "num_base_bdevs": 2, 00:25:08.170 "num_base_bdevs_discovered": 1, 00:25:08.170 "num_base_bdevs_operational": 1, 00:25:08.170 "base_bdevs_list": [ 00:25:08.170 { 00:25:08.170 "name": null, 00:25:08.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.170 "is_configured": false, 00:25:08.170 "data_offset": 0, 00:25:08.170 "data_size": 7936 00:25:08.170 }, 00:25:08.170 { 00:25:08.170 "name": "BaseBdev2", 00:25:08.170 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:25:08.170 "is_configured": true, 00:25:08.170 "data_offset": 256, 00:25:08.170 "data_size": 7936 00:25:08.170 } 00:25:08.170 ] 00:25:08.170 }' 00:25:08.170 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:08.170 05:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.428 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:25:08.428 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:08.428 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:08.428 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:08.428 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:08.428 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.428 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.428 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.428 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.428 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.685 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:08.685 "name": "raid_bdev1", 00:25:08.685 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:25:08.685 "strip_size_kb": 0, 00:25:08.685 "state": "online", 00:25:08.685 "raid_level": "raid1", 00:25:08.685 "superblock": true, 00:25:08.685 "num_base_bdevs": 2, 00:25:08.685 "num_base_bdevs_discovered": 1, 00:25:08.685 "num_base_bdevs_operational": 1, 00:25:08.685 "base_bdevs_list": [ 00:25:08.685 { 00:25:08.685 "name": null, 00:25:08.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.685 "is_configured": false, 00:25:08.685 "data_offset": 0, 00:25:08.685 "data_size": 7936 00:25:08.685 }, 00:25:08.685 { 00:25:08.685 "name": "BaseBdev2", 00:25:08.685 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:25:08.685 "is_configured": 
true, 00:25:08.685 "data_offset": 256, 00:25:08.685 "data_size": 7936 00:25:08.685 } 00:25:08.685 ] 00:25:08.685 }' 00:25:08.685 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:08.685 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.686 [2024-11-20 05:35:40.330954] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:08.686 [2024-11-20 05:35:40.331169] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:08.686 [2024-11-20 05:35:40.331250] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:08.686 request: 00:25:08.686 { 00:25:08.686 "base_bdev": "BaseBdev1", 00:25:08.686 "raid_bdev": "raid_bdev1", 00:25:08.686 "method": "bdev_raid_add_base_bdev", 00:25:08.686 "req_id": 1 00:25:08.686 } 00:25:08.686 Got JSON-RPC error response 00:25:08.686 response: 00:25:08.686 { 00:25:08.686 "code": -22, 00:25:08.686 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:08.686 } 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:08.686 05:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:09.618 "name": "raid_bdev1", 00:25:09.618 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:25:09.618 "strip_size_kb": 0, 00:25:09.618 "state": "online", 00:25:09.618 "raid_level": "raid1", 00:25:09.618 "superblock": true, 00:25:09.618 "num_base_bdevs": 2, 00:25:09.618 "num_base_bdevs_discovered": 1, 00:25:09.618 "num_base_bdevs_operational": 1, 00:25:09.618 "base_bdevs_list": [ 00:25:09.618 { 00:25:09.618 "name": null, 00:25:09.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.618 "is_configured": false, 00:25:09.618 
"data_offset": 0, 00:25:09.618 "data_size": 7936 00:25:09.618 }, 00:25:09.618 { 00:25:09.618 "name": "BaseBdev2", 00:25:09.618 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:25:09.618 "is_configured": true, 00:25:09.618 "data_offset": 256, 00:25:09.618 "data_size": 7936 00:25:09.618 } 00:25:09.618 ] 00:25:09.618 }' 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:09.618 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:09.876 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:09.876 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:09.876 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:09.876 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:09.876 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:09.876 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.876 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.876 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.876 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:09.876 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.876 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:09.876 "name": "raid_bdev1", 00:25:09.876 "uuid": "73f86049-c894-4382-9091-c42be22d5f91", 00:25:09.876 
"strip_size_kb": 0, 00:25:09.876 "state": "online", 00:25:09.876 "raid_level": "raid1", 00:25:09.876 "superblock": true, 00:25:09.876 "num_base_bdevs": 2, 00:25:09.876 "num_base_bdevs_discovered": 1, 00:25:09.876 "num_base_bdevs_operational": 1, 00:25:09.876 "base_bdevs_list": [ 00:25:09.876 { 00:25:09.876 "name": null, 00:25:09.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.876 "is_configured": false, 00:25:09.876 "data_offset": 0, 00:25:09.876 "data_size": 7936 00:25:09.876 }, 00:25:09.876 { 00:25:09.876 "name": "BaseBdev2", 00:25:09.876 "uuid": "5c64022d-bac3-5599-b63a-b9e5f38512e1", 00:25:09.876 "is_configured": true, 00:25:09.876 "data_offset": 256, 00:25:09.876 "data_size": 7936 00:25:09.876 } 00:25:09.876 ] 00:25:09.876 }' 00:25:09.876 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:10.133 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:10.133 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:10.133 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:10.133 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 85283 00:25:10.133 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 85283 ']' 00:25:10.133 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 85283 00:25:10.133 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:25:10.133 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:10.133 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85283 00:25:10.133 killing process with 
pid 85283 00:25:10.133 Received shutdown signal, test time was about 60.000000 seconds 00:25:10.133 00:25:10.134 Latency(us) 00:25:10.134 [2024-11-20T05:35:41.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.134 [2024-11-20T05:35:41.969Z] =================================================================================================================== 00:25:10.134 [2024-11-20T05:35:41.969Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:10.134 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:10.134 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:10.134 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85283' 00:25:10.134 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 85283 00:25:10.134 05:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 85283 00:25:10.134 [2024-11-20 05:35:41.781143] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:10.134 [2024-11-20 05:35:41.781238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:10.134 [2024-11-20 05:35:41.781274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:10.134 [2024-11-20 05:35:41.781283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:10.134 [2024-11-20 05:35:41.938553] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:10.698 05:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:25:10.698 00:25:10.698 real 0m16.977s 00:25:10.698 user 0m21.684s 00:25:10.698 sys 0m1.927s 00:25:10.698 05:35:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:10.698 ************************************ 00:25:10.698 END TEST raid_rebuild_test_sb_md_separate 00:25:10.698 ************************************ 00:25:10.698 05:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:10.698 05:35:42 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:25:10.698 05:35:42 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:25:10.698 05:35:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:10.698 05:35:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:10.698 05:35:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:10.698 ************************************ 00:25:10.698 START TEST raid_state_function_test_sb_md_interleaved 00:25:10.698 ************************************ 00:25:10.698 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:25:10.698 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:25:10.698 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:25:10.698 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:10.698 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:10.955 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:10.955 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:10.955 05:35:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:10.955 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:10.955 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:10.955 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:10.955 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:10.955 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:10.955 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:10.955 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:10.955 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=85947 00:25:10.956 Process raid pid: 85947 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85947' 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 85947 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 85947 ']' 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:10.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:10.956 05:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:10.956 [2024-11-20 05:35:42.598789] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:25:10.956 [2024-11-20 05:35:42.599021] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.956 [2024-11-20 05:35:42.755572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.214 [2024-11-20 05:35:42.839806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.214 [2024-11-20 05:35:42.949686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:11.214 [2024-11-20 05:35:42.949714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:11.783 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:11.783 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:25:11.783 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:11.783 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.783 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:11.783 [2024-11-20 05:35:43.517917] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:11.783 [2024-11-20 05:35:43.517963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:11.783 [2024-11-20 05:35:43.517972] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:11.783 [2024-11-20 05:35:43.517979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:11.783 05:35:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.783 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:11.784 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:11.784 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:11.784 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:11.784 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:11.784 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:11.784 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:11.784 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:11.784 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:11.784 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:11.784 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.784 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.784 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.784 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:11.784 05:35:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.784 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:11.784 "name": "Existed_Raid", 00:25:11.784 "uuid": "f5640113-107f-4f10-8b95-7433d4559fbf", 00:25:11.784 "strip_size_kb": 0, 00:25:11.784 "state": "configuring", 00:25:11.784 "raid_level": "raid1", 00:25:11.784 "superblock": true, 00:25:11.784 "num_base_bdevs": 2, 00:25:11.784 "num_base_bdevs_discovered": 0, 00:25:11.784 "num_base_bdevs_operational": 2, 00:25:11.784 "base_bdevs_list": [ 00:25:11.784 { 00:25:11.784 "name": "BaseBdev1", 00:25:11.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.784 "is_configured": false, 00:25:11.784 "data_offset": 0, 00:25:11.784 "data_size": 0 00:25:11.784 }, 00:25:11.784 { 00:25:11.784 "name": "BaseBdev2", 00:25:11.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.784 "is_configured": false, 00:25:11.784 "data_offset": 0, 00:25:11.784 "data_size": 0 00:25:11.784 } 00:25:11.784 ] 00:25:11.784 }' 00:25:11.784 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:11.784 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.042 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:12.042 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.042 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.042 [2024-11-20 05:35:43.845927] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:12.042 [2024-11-20 05:35:43.845956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:25:12.042 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.042 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:12.042 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.042 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.042 [2024-11-20 05:35:43.853937] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:12.042 [2024-11-20 05:35:43.853971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:12.042 [2024-11-20 05:35:43.853979] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:12.042 [2024-11-20 05:35:43.853988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:12.042 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.042 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:25:12.042 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.042 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.299 [2024-11-20 05:35:43.881833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:12.299 BaseBdev1 00:25:12.299 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.299 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:12.299 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:25:12.299 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:12.299 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:25:12.299 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:12.299 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:12.299 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:12.299 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.299 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.299 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.299 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:12.299 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.299 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.299 [ 00:25:12.299 { 00:25:12.299 "name": "BaseBdev1", 00:25:12.299 "aliases": [ 00:25:12.299 "e2dee418-20ac-49d5-bd2c-b2b3934835f8" 00:25:12.299 ], 00:25:12.299 "product_name": "Malloc disk", 00:25:12.299 "block_size": 4128, 00:25:12.299 "num_blocks": 8192, 00:25:12.299 "uuid": "e2dee418-20ac-49d5-bd2c-b2b3934835f8", 00:25:12.299 "md_size": 32, 00:25:12.299 
"md_interleave": true, 00:25:12.299 "dif_type": 0, 00:25:12.299 "assigned_rate_limits": { 00:25:12.299 "rw_ios_per_sec": 0, 00:25:12.299 "rw_mbytes_per_sec": 0, 00:25:12.299 "r_mbytes_per_sec": 0, 00:25:12.299 "w_mbytes_per_sec": 0 00:25:12.299 }, 00:25:12.299 "claimed": true, 00:25:12.299 "claim_type": "exclusive_write", 00:25:12.299 "zoned": false, 00:25:12.299 "supported_io_types": { 00:25:12.299 "read": true, 00:25:12.299 "write": true, 00:25:12.299 "unmap": true, 00:25:12.299 "flush": true, 00:25:12.299 "reset": true, 00:25:12.299 "nvme_admin": false, 00:25:12.299 "nvme_io": false, 00:25:12.299 "nvme_io_md": false, 00:25:12.299 "write_zeroes": true, 00:25:12.299 "zcopy": true, 00:25:12.299 "get_zone_info": false, 00:25:12.299 "zone_management": false, 00:25:12.299 "zone_append": false, 00:25:12.299 "compare": false, 00:25:12.299 "compare_and_write": false, 00:25:12.300 "abort": true, 00:25:12.300 "seek_hole": false, 00:25:12.300 "seek_data": false, 00:25:12.300 "copy": true, 00:25:12.300 "nvme_iov_md": false 00:25:12.300 }, 00:25:12.300 "memory_domains": [ 00:25:12.300 { 00:25:12.300 "dma_device_id": "system", 00:25:12.300 "dma_device_type": 1 00:25:12.300 }, 00:25:12.300 { 00:25:12.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.300 "dma_device_type": 2 00:25:12.300 } 00:25:12.300 ], 00:25:12.300 "driver_specific": {} 00:25:12.300 } 00:25:12.300 ] 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:12.300 05:35:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:12.300 "name": "Existed_Raid", 00:25:12.300 "uuid": "95468249-6183-4399-94c4-49408a3c14f1", 00:25:12.300 "strip_size_kb": 0, 00:25:12.300 "state": "configuring", 00:25:12.300 "raid_level": "raid1", 
00:25:12.300 "superblock": true, 00:25:12.300 "num_base_bdevs": 2, 00:25:12.300 "num_base_bdevs_discovered": 1, 00:25:12.300 "num_base_bdevs_operational": 2, 00:25:12.300 "base_bdevs_list": [ 00:25:12.300 { 00:25:12.300 "name": "BaseBdev1", 00:25:12.300 "uuid": "e2dee418-20ac-49d5-bd2c-b2b3934835f8", 00:25:12.300 "is_configured": true, 00:25:12.300 "data_offset": 256, 00:25:12.300 "data_size": 7936 00:25:12.300 }, 00:25:12.300 { 00:25:12.300 "name": "BaseBdev2", 00:25:12.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.300 "is_configured": false, 00:25:12.300 "data_offset": 0, 00:25:12.300 "data_size": 0 00:25:12.300 } 00:25:12.300 ] 00:25:12.300 }' 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:12.300 05:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.558 [2024-11-20 05:35:44.225968] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:12.558 [2024-11-20 05:35:44.226009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.558 [2024-11-20 05:35:44.234013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:12.558 [2024-11-20 05:35:44.235556] bdev.c:8348:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:12.558 [2024-11-20 05:35:44.235591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:12.558 
05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:12.558 "name": "Existed_Raid", 00:25:12.558 "uuid": "e50ffb8c-760b-44a8-88fe-6a9793670381", 00:25:12.558 "strip_size_kb": 0, 00:25:12.558 "state": "configuring", 00:25:12.558 "raid_level": "raid1", 00:25:12.558 "superblock": true, 00:25:12.558 "num_base_bdevs": 2, 00:25:12.558 "num_base_bdevs_discovered": 1, 00:25:12.558 "num_base_bdevs_operational": 2, 00:25:12.558 "base_bdevs_list": [ 00:25:12.558 { 00:25:12.558 "name": "BaseBdev1", 00:25:12.558 "uuid": "e2dee418-20ac-49d5-bd2c-b2b3934835f8", 00:25:12.558 "is_configured": true, 00:25:12.558 "data_offset": 256, 00:25:12.558 "data_size": 7936 00:25:12.558 }, 00:25:12.558 { 00:25:12.558 "name": "BaseBdev2", 00:25:12.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.558 "is_configured": false, 00:25:12.558 "data_offset": 0, 00:25:12.558 "data_size": 0 00:25:12.558 } 00:25:12.558 ] 00:25:12.558 }' 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:25:12.558 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.815 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:25:12.815 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.815 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.815 [2024-11-20 05:35:44.596653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:12.815 [2024-11-20 05:35:44.597059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:12.816 [2024-11-20 05:35:44.597074] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:12.816 [2024-11-20 05:35:44.597153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:12.816 [2024-11-20 05:35:44.597211] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:12.816 [2024-11-20 05:35:44.597220] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:12.816 [2024-11-20 05:35:44.597267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.816 BaseBdev2 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.816 [ 00:25:12.816 { 00:25:12.816 "name": "BaseBdev2", 00:25:12.816 "aliases": [ 00:25:12.816 "23bcd00c-74d8-4d36-9caf-8e9145eee7dd" 00:25:12.816 ], 00:25:12.816 "product_name": "Malloc disk", 00:25:12.816 "block_size": 4128, 00:25:12.816 "num_blocks": 8192, 00:25:12.816 "uuid": "23bcd00c-74d8-4d36-9caf-8e9145eee7dd", 00:25:12.816 "md_size": 32, 00:25:12.816 "md_interleave": true, 00:25:12.816 "dif_type": 0, 00:25:12.816 "assigned_rate_limits": { 00:25:12.816 "rw_ios_per_sec": 0, 00:25:12.816 "rw_mbytes_per_sec": 0, 00:25:12.816 "r_mbytes_per_sec": 0, 00:25:12.816 "w_mbytes_per_sec": 0 00:25:12.816 }, 00:25:12.816 "claimed": true, 00:25:12.816 "claim_type": "exclusive_write", 
00:25:12.816 "zoned": false, 00:25:12.816 "supported_io_types": { 00:25:12.816 "read": true, 00:25:12.816 "write": true, 00:25:12.816 "unmap": true, 00:25:12.816 "flush": true, 00:25:12.816 "reset": true, 00:25:12.816 "nvme_admin": false, 00:25:12.816 "nvme_io": false, 00:25:12.816 "nvme_io_md": false, 00:25:12.816 "write_zeroes": true, 00:25:12.816 "zcopy": true, 00:25:12.816 "get_zone_info": false, 00:25:12.816 "zone_management": false, 00:25:12.816 "zone_append": false, 00:25:12.816 "compare": false, 00:25:12.816 "compare_and_write": false, 00:25:12.816 "abort": true, 00:25:12.816 "seek_hole": false, 00:25:12.816 "seek_data": false, 00:25:12.816 "copy": true, 00:25:12.816 "nvme_iov_md": false 00:25:12.816 }, 00:25:12.816 "memory_domains": [ 00:25:12.816 { 00:25:12.816 "dma_device_id": "system", 00:25:12.816 "dma_device_type": 1 00:25:12.816 }, 00:25:12.816 { 00:25:12.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.816 "dma_device_type": 2 00:25:12.816 } 00:25:12.816 ], 00:25:12.816 "driver_specific": {} 00:25:12.816 } 00:25:12.816 ] 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:12.816 
05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:12.816 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.073 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:13.073 "name": "Existed_Raid", 00:25:13.073 "uuid": "e50ffb8c-760b-44a8-88fe-6a9793670381", 00:25:13.073 "strip_size_kb": 0, 00:25:13.073 "state": "online", 00:25:13.073 "raid_level": "raid1", 00:25:13.073 "superblock": true, 00:25:13.074 "num_base_bdevs": 2, 00:25:13.074 "num_base_bdevs_discovered": 2, 00:25:13.074 
"num_base_bdevs_operational": 2, 00:25:13.074 "base_bdevs_list": [ 00:25:13.074 { 00:25:13.074 "name": "BaseBdev1", 00:25:13.074 "uuid": "e2dee418-20ac-49d5-bd2c-b2b3934835f8", 00:25:13.074 "is_configured": true, 00:25:13.074 "data_offset": 256, 00:25:13.074 "data_size": 7936 00:25:13.074 }, 00:25:13.074 { 00:25:13.074 "name": "BaseBdev2", 00:25:13.074 "uuid": "23bcd00c-74d8-4d36-9caf-8e9145eee7dd", 00:25:13.074 "is_configured": true, 00:25:13.074 "data_offset": 256, 00:25:13.074 "data_size": 7936 00:25:13.074 } 00:25:13.074 ] 00:25:13.074 }' 00:25:13.074 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:13.074 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.350 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:13.350 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:13.350 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:13.350 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:13.350 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:25:13.350 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:13.350 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:13.350 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.350 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.350 05:35:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:13.350 [2024-11-20 05:35:44.953032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:13.350 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.350 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:13.350 "name": "Existed_Raid", 00:25:13.350 "aliases": [ 00:25:13.350 "e50ffb8c-760b-44a8-88fe-6a9793670381" 00:25:13.350 ], 00:25:13.350 "product_name": "Raid Volume", 00:25:13.350 "block_size": 4128, 00:25:13.350 "num_blocks": 7936, 00:25:13.350 "uuid": "e50ffb8c-760b-44a8-88fe-6a9793670381", 00:25:13.350 "md_size": 32, 00:25:13.350 "md_interleave": true, 00:25:13.350 "dif_type": 0, 00:25:13.350 "assigned_rate_limits": { 00:25:13.350 "rw_ios_per_sec": 0, 00:25:13.350 "rw_mbytes_per_sec": 0, 00:25:13.350 "r_mbytes_per_sec": 0, 00:25:13.351 "w_mbytes_per_sec": 0 00:25:13.351 }, 00:25:13.351 "claimed": false, 00:25:13.351 "zoned": false, 00:25:13.351 "supported_io_types": { 00:25:13.351 "read": true, 00:25:13.351 "write": true, 00:25:13.351 "unmap": false, 00:25:13.351 "flush": false, 00:25:13.351 "reset": true, 00:25:13.351 "nvme_admin": false, 00:25:13.351 "nvme_io": false, 00:25:13.351 "nvme_io_md": false, 00:25:13.351 "write_zeroes": true, 00:25:13.351 "zcopy": false, 00:25:13.351 "get_zone_info": false, 00:25:13.351 "zone_management": false, 00:25:13.351 "zone_append": false, 00:25:13.351 "compare": false, 00:25:13.351 "compare_and_write": false, 00:25:13.351 "abort": false, 00:25:13.351 "seek_hole": false, 00:25:13.351 "seek_data": false, 00:25:13.351 "copy": false, 00:25:13.351 "nvme_iov_md": false 00:25:13.351 }, 00:25:13.351 "memory_domains": [ 00:25:13.351 { 00:25:13.351 "dma_device_id": "system", 00:25:13.351 "dma_device_type": 1 00:25:13.351 }, 00:25:13.351 { 00:25:13.351 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:25:13.351 "dma_device_type": 2 00:25:13.351 }, 00:25:13.351 { 00:25:13.351 "dma_device_id": "system", 00:25:13.351 "dma_device_type": 1 00:25:13.351 }, 00:25:13.351 { 00:25:13.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.351 "dma_device_type": 2 00:25:13.351 } 00:25:13.351 ], 00:25:13.351 "driver_specific": { 00:25:13.351 "raid": { 00:25:13.351 "uuid": "e50ffb8c-760b-44a8-88fe-6a9793670381", 00:25:13.351 "strip_size_kb": 0, 00:25:13.351 "state": "online", 00:25:13.351 "raid_level": "raid1", 00:25:13.351 "superblock": true, 00:25:13.351 "num_base_bdevs": 2, 00:25:13.351 "num_base_bdevs_discovered": 2, 00:25:13.351 "num_base_bdevs_operational": 2, 00:25:13.351 "base_bdevs_list": [ 00:25:13.351 { 00:25:13.351 "name": "BaseBdev1", 00:25:13.351 "uuid": "e2dee418-20ac-49d5-bd2c-b2b3934835f8", 00:25:13.351 "is_configured": true, 00:25:13.351 "data_offset": 256, 00:25:13.351 "data_size": 7936 00:25:13.351 }, 00:25:13.351 { 00:25:13.351 "name": "BaseBdev2", 00:25:13.351 "uuid": "23bcd00c-74d8-4d36-9caf-8e9145eee7dd", 00:25:13.351 "is_configured": true, 00:25:13.351 "data_offset": 256, 00:25:13.351 "data_size": 7936 00:25:13.351 } 00:25:13.351 ] 00:25:13.351 } 00:25:13.351 } 00:25:13.351 }' 00:25:13.351 05:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:13.351 BaseBdev2' 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:13.351 
05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.351 [2024-11-20 05:35:45.112848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:13.351 05:35:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:13.351 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.609 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:13.609 "name": "Existed_Raid", 00:25:13.609 "uuid": "e50ffb8c-760b-44a8-88fe-6a9793670381", 00:25:13.609 "strip_size_kb": 0, 00:25:13.609 "state": "online", 00:25:13.609 "raid_level": "raid1", 00:25:13.609 "superblock": true, 00:25:13.609 "num_base_bdevs": 2, 00:25:13.609 "num_base_bdevs_discovered": 1, 00:25:13.609 "num_base_bdevs_operational": 1, 00:25:13.609 "base_bdevs_list": [ 00:25:13.609 { 00:25:13.609 "name": null, 00:25:13.609 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:13.609 "is_configured": false, 00:25:13.609 "data_offset": 0, 00:25:13.609 "data_size": 7936 00:25:13.609 }, 00:25:13.609 { 00:25:13.609 "name": "BaseBdev2", 00:25:13.609 "uuid": "23bcd00c-74d8-4d36-9caf-8e9145eee7dd", 00:25:13.609 "is_configured": true, 00:25:13.609 "data_offset": 256, 00:25:13.609 "data_size": 7936 00:25:13.609 } 00:25:13.609 ] 00:25:13.609 }' 00:25:13.609 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:13.609 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:13.867 05:35:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.867 [2024-11-20 05:35:45.532052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:13.867 [2024-11-20 05:35:45.532250] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:13.867 [2024-11-20 05:35:45.579048] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:13.867 [2024-11-20 05:35:45.579229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:13.867 [2024-11-20 05:35:45.579247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.867 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.868 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:13.868 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:13.868 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:25:13.868 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 85947 00:25:13.868 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 85947 ']' 00:25:13.868 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 85947 00:25:13.868 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:25:13.868 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:13.868 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85947 00:25:13.868 killing process with pid 85947 00:25:13.868 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:13.868 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:13.868 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85947' 00:25:13.868 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 85947 00:25:13.868 [2024-11-20 05:35:45.644961] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:13.868 05:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 85947 00:25:13.868 [2024-11-20 05:35:45.653299] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:14.432 
05:35:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:25:14.432 00:25:14.432 real 0m3.698s 00:25:14.432 user 0m5.476s 00:25:14.432 sys 0m0.577s 00:25:14.432 05:35:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:14.432 05:35:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:14.432 ************************************ 00:25:14.432 END TEST raid_state_function_test_sb_md_interleaved 00:25:14.432 ************************************ 00:25:14.432 05:35:46 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:25:14.432 05:35:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:14.432 05:35:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:14.432 05:35:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:14.689 ************************************ 00:25:14.689 START TEST raid_superblock_test_md_interleaved 00:25:14.689 ************************************ 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=86188 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 86188 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 86188 ']' 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:14.689 05:35:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:14.689 [2024-11-20 05:35:46.336817] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:25:14.689 [2024-11-20 05:35:46.337154] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86188 ] 00:25:14.689 [2024-11-20 05:35:46.497381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.127 [2024-11-20 05:35:46.597513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.127 [2024-11-20 05:35:46.732255] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:15.127 [2024-11-20 05:35:46.732457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.387 malloc1 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.387 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.387 [2024-11-20 05:35:47.216883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:15.387 [2024-11-20 05:35:47.216939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:15.387 [2024-11-20 05:35:47.216959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:15.387 [2024-11-20 05:35:47.216969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:15.387 
[2024-11-20 05:35:47.218879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:15.387 [2024-11-20 05:35:47.218913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:15.645 pt1 00:25:15.645 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.645 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:15.645 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:15.645 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:15.645 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:15.645 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:15.645 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:15.645 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.646 malloc2 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.646 [2024-11-20 05:35:47.252679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:15.646 [2024-11-20 05:35:47.252736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:15.646 [2024-11-20 05:35:47.252756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:15.646 [2024-11-20 05:35:47.252765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:15.646 [2024-11-20 05:35:47.254710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:15.646 [2024-11-20 05:35:47.254742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:15.646 pt2 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.646 [2024-11-20 05:35:47.260727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:15.646 [2024-11-20 05:35:47.262637] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:15.646 [2024-11-20 05:35:47.262814] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:15.646 [2024-11-20 05:35:47.262826] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:15.646 [2024-11-20 05:35:47.262904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:15.646 [2024-11-20 05:35:47.262971] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:15.646 [2024-11-20 05:35:47.262981] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:15.646 [2024-11-20 05:35:47.263052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:15.646 
05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.646 "name": "raid_bdev1", 00:25:15.646 "uuid": "8c7d935e-bac8-4f16-bbec-011527af58cc", 00:25:15.646 "strip_size_kb": 0, 00:25:15.646 "state": "online", 00:25:15.646 "raid_level": "raid1", 00:25:15.646 "superblock": true, 00:25:15.646 "num_base_bdevs": 2, 00:25:15.646 "num_base_bdevs_discovered": 2, 00:25:15.646 "num_base_bdevs_operational": 2, 00:25:15.646 "base_bdevs_list": [ 00:25:15.646 { 00:25:15.646 "name": "pt1", 00:25:15.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:15.646 "is_configured": true, 00:25:15.646 "data_offset": 256, 00:25:15.646 "data_size": 7936 00:25:15.646 }, 00:25:15.646 { 00:25:15.646 "name": "pt2", 00:25:15.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:15.646 "is_configured": true, 00:25:15.646 "data_offset": 256, 00:25:15.646 "data_size": 7936 00:25:15.646 } 00:25:15.646 ] 00:25:15.646 }' 00:25:15.646 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.646 05:35:47 
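The `verify_raid_bdev_state` helper traced above pulls the `raid_bdev1` entry out of `bdev_raid_get_bdevs all` with jq and compares a handful of fields against expected values. A minimal Python paraphrase of that check, using the exact JSON shape captured in this log (the UUID is simply the one this run generated; the real helper in `bdev_raid.sh` performs additional checks not shown here):

```python
import json

# raid bdev info as reported by `bdev_raid_get_bdevs all` in the trace above
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "8c7d935e-bac8-4f16-bbec-011527af58cc",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001",
     "is_configured": true, "data_offset": 256, "data_size": 7936},
    {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002",
     "is_configured": true, "data_offset": 256, "data_size": 7936}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    # Sketch of the field checks bdev_raid.sh's verify_raid_bdev_state makes
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational

# bdev_raid.sh@431 asserts: raid_bdev1 online raid1 0 2
verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 2)
```

For raid1 the strip size is 0 (mirroring has no striping), which is why `strip_size=0` was set at `bdev_raid.sh@408` earlier in the trace.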
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.903 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:15.903 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:15.903 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:15.903 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:15.903 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:25:15.903 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:15.903 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:15.903 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:15.903 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.903 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.903 [2024-11-20 05:35:47.597072] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:15.903 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.903 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:15.903 "name": "raid_bdev1", 00:25:15.903 "aliases": [ 00:25:15.903 "8c7d935e-bac8-4f16-bbec-011527af58cc" 00:25:15.904 ], 00:25:15.904 "product_name": "Raid Volume", 00:25:15.904 "block_size": 4128, 00:25:15.904 "num_blocks": 7936, 00:25:15.904 "uuid": "8c7d935e-bac8-4f16-bbec-011527af58cc", 00:25:15.904 "md_size": 32, 
00:25:15.904 "md_interleave": true, 00:25:15.904 "dif_type": 0, 00:25:15.904 "assigned_rate_limits": { 00:25:15.904 "rw_ios_per_sec": 0, 00:25:15.904 "rw_mbytes_per_sec": 0, 00:25:15.904 "r_mbytes_per_sec": 0, 00:25:15.904 "w_mbytes_per_sec": 0 00:25:15.904 }, 00:25:15.904 "claimed": false, 00:25:15.904 "zoned": false, 00:25:15.904 "supported_io_types": { 00:25:15.904 "read": true, 00:25:15.904 "write": true, 00:25:15.904 "unmap": false, 00:25:15.904 "flush": false, 00:25:15.904 "reset": true, 00:25:15.904 "nvme_admin": false, 00:25:15.904 "nvme_io": false, 00:25:15.904 "nvme_io_md": false, 00:25:15.904 "write_zeroes": true, 00:25:15.904 "zcopy": false, 00:25:15.904 "get_zone_info": false, 00:25:15.904 "zone_management": false, 00:25:15.904 "zone_append": false, 00:25:15.904 "compare": false, 00:25:15.904 "compare_and_write": false, 00:25:15.904 "abort": false, 00:25:15.904 "seek_hole": false, 00:25:15.904 "seek_data": false, 00:25:15.904 "copy": false, 00:25:15.904 "nvme_iov_md": false 00:25:15.904 }, 00:25:15.904 "memory_domains": [ 00:25:15.904 { 00:25:15.904 "dma_device_id": "system", 00:25:15.904 "dma_device_type": 1 00:25:15.904 }, 00:25:15.904 { 00:25:15.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:15.904 "dma_device_type": 2 00:25:15.904 }, 00:25:15.904 { 00:25:15.904 "dma_device_id": "system", 00:25:15.904 "dma_device_type": 1 00:25:15.904 }, 00:25:15.904 { 00:25:15.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:15.904 "dma_device_type": 2 00:25:15.904 } 00:25:15.904 ], 00:25:15.904 "driver_specific": { 00:25:15.904 "raid": { 00:25:15.904 "uuid": "8c7d935e-bac8-4f16-bbec-011527af58cc", 00:25:15.904 "strip_size_kb": 0, 00:25:15.904 "state": "online", 00:25:15.904 "raid_level": "raid1", 00:25:15.904 "superblock": true, 00:25:15.904 "num_base_bdevs": 2, 00:25:15.904 "num_base_bdevs_discovered": 2, 00:25:15.904 "num_base_bdevs_operational": 2, 00:25:15.904 "base_bdevs_list": [ 00:25:15.904 { 00:25:15.904 "name": "pt1", 00:25:15.904 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:15.904 "is_configured": true, 00:25:15.904 "data_offset": 256, 00:25:15.904 "data_size": 7936 00:25:15.904 }, 00:25:15.904 { 00:25:15.904 "name": "pt2", 00:25:15.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:15.904 "is_configured": true, 00:25:15.904 "data_offset": 256, 00:25:15.904 "data_size": 7936 00:25:15.904 } 00:25:15.904 ] 00:25:15.904 } 00:25:15.904 } 00:25:15.904 }' 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:15.904 pt2' 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:15.904 05:35:47 
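The `verify_raid_bdev_properties` steps above flatten four metadata-related fields into a single string with the jq filter `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`, then require the raid volume and every passthru base bdev to produce the same string. The expected `4128 32 true 0` follows from the malloc creation earlier in the trace: 4096-byte data blocks with 32 bytes of interleaved metadata give a 4128-byte block size. A small sketch of the equivalent comparison (field values copied from this log; jq renders booleans lowercase, hence the `.lower()`):

```python
def md_params(bdev):
    # Equivalent of jq: [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
    return " ".join(
        str(bdev[k]).lower() if isinstance(bdev[k], bool) else str(bdev[k])
        for k in ("block_size", "md_size", "md_interleave", "dif_type")
    )

# Values as reported for raid_bdev1 and pt1 in the trace above
raid_volume = {"block_size": 4128, "md_size": 32, "md_interleave": True, "dif_type": 0}
pt1 = {"block_size": 4128, "md_size": 32, "md_interleave": True, "dif_type": 0}

cmp_raid_bdev = md_params(raid_volume)   # "4128 32 true 0"
assert cmp_raid_bdev == md_params(pt1)   # raid volume inherits base bdev md layout
```

The point of the test: an md_interleaved raid1 volume must expose the same interleaved-metadata geometry as its base bdevs, not strip or reshape it.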
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.904 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.162 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:16.162 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:16.162 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:16.162 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.162 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.162 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:16.162 [2024-11-20 05:35:47.761071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:16.162 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8c7d935e-bac8-4f16-bbec-011527af58cc 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 8c7d935e-bac8-4f16-bbec-011527af58cc ']' 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.163 [2024-11-20 05:35:47.792764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:16.163 [2024-11-20 05:35:47.792887] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:16.163 [2024-11-20 05:35:47.793014] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:16.163 [2024-11-20 05:35:47.793090] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:16.163 [2024-11-20 05:35:47.793159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.163 05:35:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.163 05:35:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.163 [2024-11-20 05:35:47.892830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:16.163 [2024-11-20 05:35:47.894783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:16.163 [2024-11-20 05:35:47.894935] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:25:16.163 [2024-11-20 05:35:47.894992] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:16.163 [2024-11-20 05:35:47.895006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:16.163 [2024-11-20 05:35:47.895017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:16.163 request: 00:25:16.163 { 00:25:16.163 "name": "raid_bdev1", 00:25:16.163 "raid_level": "raid1", 00:25:16.163 "base_bdevs": [ 00:25:16.163 "malloc1", 00:25:16.163 "malloc2" 00:25:16.163 ], 00:25:16.163 "superblock": false, 00:25:16.163 "method": "bdev_raid_create", 00:25:16.163 "req_id": 1 00:25:16.163 } 00:25:16.163 Got JSON-RPC error response 00:25:16.163 response: 00:25:16.163 { 00:25:16.163 "code": -17, 00:25:16.163 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:16.163 } 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.163 05:35:47 
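The `NOT rpc_cmd bdev_raid_create ...` step above is a negative test: both malloc bdevs still carry the superblock written by the raid bdev created earlier, so re-creating `raid_bdev1` directly on top of them must fail. The JSON-RPC error code `-17` is the negated errno `EEXIST` ("File exists"). A sketch of the check the surrounding `NOT`/`es=1` bookkeeping effectively performs on this response:

```python
import json

# JSON-RPC error body returned by bdev_raid_create in the trace above
response = json.loads(
    '{"code": -17,'
    ' "message": "Failed to create RAID bdev raid_bdev1: File exists"}'
)

# -17 == -EEXIST: the base bdevs already hold a superblock from another raid bdev
assert response["code"] == -17
assert "File exists" in response["message"]
```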
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.163 [2024-11-20 05:35:47.932813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:16.163 [2024-11-20 05:35:47.932935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:16.163 [2024-11-20 05:35:47.932994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:16.163 [2024-11-20 05:35:47.933009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:16.163 [2024-11-20 05:35:47.934902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:16.163 [2024-11-20 05:35:47.934936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:16.163 [2024-11-20 05:35:47.934981] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:16.163 [2024-11-20 05:35:47.935035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:16.163 pt1 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.163 05:35:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.163 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:16.163 
"name": "raid_bdev1", 00:25:16.163 "uuid": "8c7d935e-bac8-4f16-bbec-011527af58cc", 00:25:16.163 "strip_size_kb": 0, 00:25:16.163 "state": "configuring", 00:25:16.163 "raid_level": "raid1", 00:25:16.163 "superblock": true, 00:25:16.163 "num_base_bdevs": 2, 00:25:16.163 "num_base_bdevs_discovered": 1, 00:25:16.163 "num_base_bdevs_operational": 2, 00:25:16.163 "base_bdevs_list": [ 00:25:16.163 { 00:25:16.164 "name": "pt1", 00:25:16.164 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:16.164 "is_configured": true, 00:25:16.164 "data_offset": 256, 00:25:16.164 "data_size": 7936 00:25:16.164 }, 00:25:16.164 { 00:25:16.164 "name": null, 00:25:16.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:16.164 "is_configured": false, 00:25:16.164 "data_offset": 256, 00:25:16.164 "data_size": 7936 00:25:16.164 } 00:25:16.164 ] 00:25:16.164 }' 00:25:16.164 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:16.164 05:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.422 [2024-11-20 05:35:48.240900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:16.422 [2024-11-20 05:35:48.240966] 
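At this point the trace shows the array half-assembled: re-registering `pt1` made examine find its superblock, so `raid_bdev1` sits in `configuring` with one of two base bdevs discovered, and only goes back online once `pt2` is claimed. A sketch of that intermediate state check (JSON shape copied from the `bdev_raid.sh@468` output above):

```python
import json

# raid_bdev1 after only pt1 has been re-examined, per the trace above
configuring = json.loads("""
{
  "name": "raid_bdev1",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "pt1", "is_configured": true},
    {"name": null, "is_configured": false}
  ]
}
""")

# The array cannot go online until every operational slot is discovered
assert configuring["state"] == "configuring"
assert configuring["num_base_bdevs_discovered"] < configuring["num_base_bdevs_operational"]
```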
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:16.422 [2024-11-20 05:35:48.240985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:16.422 [2024-11-20 05:35:48.240996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:16.422 [2024-11-20 05:35:48.241142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:16.422 [2024-11-20 05:35:48.241157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:16.422 [2024-11-20 05:35:48.241200] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:16.422 [2024-11-20 05:35:48.241220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:16.422 [2024-11-20 05:35:48.241299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:16.422 [2024-11-20 05:35:48.241309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:16.422 [2024-11-20 05:35:48.241380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:16.422 [2024-11-20 05:35:48.241438] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:16.422 [2024-11-20 05:35:48.241445] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:16.422 [2024-11-20 05:35:48.241504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:16.422 pt2 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:16.422 05:35:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.422 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.681 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.681 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:16.681 "name": 
"raid_bdev1", 00:25:16.681 "uuid": "8c7d935e-bac8-4f16-bbec-011527af58cc", 00:25:16.681 "strip_size_kb": 0, 00:25:16.681 "state": "online", 00:25:16.681 "raid_level": "raid1", 00:25:16.681 "superblock": true, 00:25:16.681 "num_base_bdevs": 2, 00:25:16.681 "num_base_bdevs_discovered": 2, 00:25:16.681 "num_base_bdevs_operational": 2, 00:25:16.681 "base_bdevs_list": [ 00:25:16.681 { 00:25:16.681 "name": "pt1", 00:25:16.681 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:16.681 "is_configured": true, 00:25:16.681 "data_offset": 256, 00:25:16.681 "data_size": 7936 00:25:16.681 }, 00:25:16.681 { 00:25:16.681 "name": "pt2", 00:25:16.681 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:16.681 "is_configured": true, 00:25:16.681 "data_offset": 256, 00:25:16.681 "data_size": 7936 00:25:16.681 } 00:25:16.681 ] 00:25:16.681 }' 00:25:16.681 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:16.681 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.940 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:16.940 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:16.940 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:16.940 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:16.940 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:25:16.940 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:16.940 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:16.940 05:35:48 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.940 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:16.940 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.940 [2024-11-20 05:35:48.537244] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:16.940 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.940 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:16.940 "name": "raid_bdev1", 00:25:16.940 "aliases": [ 00:25:16.940 "8c7d935e-bac8-4f16-bbec-011527af58cc" 00:25:16.940 ], 00:25:16.940 "product_name": "Raid Volume", 00:25:16.940 "block_size": 4128, 00:25:16.940 "num_blocks": 7936, 00:25:16.940 "uuid": "8c7d935e-bac8-4f16-bbec-011527af58cc", 00:25:16.940 "md_size": 32, 00:25:16.940 "md_interleave": true, 00:25:16.940 "dif_type": 0, 00:25:16.940 "assigned_rate_limits": { 00:25:16.940 "rw_ios_per_sec": 0, 00:25:16.940 "rw_mbytes_per_sec": 0, 00:25:16.940 "r_mbytes_per_sec": 0, 00:25:16.940 "w_mbytes_per_sec": 0 00:25:16.940 }, 00:25:16.940 "claimed": false, 00:25:16.940 "zoned": false, 00:25:16.940 "supported_io_types": { 00:25:16.940 "read": true, 00:25:16.940 "write": true, 00:25:16.940 "unmap": false, 00:25:16.940 "flush": false, 00:25:16.940 "reset": true, 00:25:16.940 "nvme_admin": false, 00:25:16.940 "nvme_io": false, 00:25:16.940 "nvme_io_md": false, 00:25:16.940 "write_zeroes": true, 00:25:16.940 "zcopy": false, 00:25:16.940 "get_zone_info": false, 00:25:16.940 "zone_management": false, 00:25:16.940 "zone_append": false, 00:25:16.940 "compare": false, 00:25:16.940 "compare_and_write": false, 00:25:16.940 "abort": false, 00:25:16.940 "seek_hole": false, 00:25:16.940 "seek_data": false, 00:25:16.940 "copy": false, 00:25:16.940 "nvme_iov_md": 
false 00:25:16.940 }, 00:25:16.940 "memory_domains": [ 00:25:16.940 { 00:25:16.940 "dma_device_id": "system", 00:25:16.940 "dma_device_type": 1 00:25:16.940 }, 00:25:16.940 { 00:25:16.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:16.940 "dma_device_type": 2 00:25:16.940 }, 00:25:16.940 { 00:25:16.940 "dma_device_id": "system", 00:25:16.941 "dma_device_type": 1 00:25:16.941 }, 00:25:16.941 { 00:25:16.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:16.941 "dma_device_type": 2 00:25:16.941 } 00:25:16.941 ], 00:25:16.941 "driver_specific": { 00:25:16.941 "raid": { 00:25:16.941 "uuid": "8c7d935e-bac8-4f16-bbec-011527af58cc", 00:25:16.941 "strip_size_kb": 0, 00:25:16.941 "state": "online", 00:25:16.941 "raid_level": "raid1", 00:25:16.941 "superblock": true, 00:25:16.941 "num_base_bdevs": 2, 00:25:16.941 "num_base_bdevs_discovered": 2, 00:25:16.941 "num_base_bdevs_operational": 2, 00:25:16.941 "base_bdevs_list": [ 00:25:16.941 { 00:25:16.941 "name": "pt1", 00:25:16.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:16.941 "is_configured": true, 00:25:16.941 "data_offset": 256, 00:25:16.941 "data_size": 7936 00:25:16.941 }, 00:25:16.941 { 00:25:16.941 "name": "pt2", 00:25:16.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:16.941 "is_configured": true, 00:25:16.941 "data_offset": 256, 00:25:16.941 "data_size": 7936 00:25:16.941 } 00:25:16.941 ] 00:25:16.941 } 00:25:16.941 } 00:25:16.941 }' 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:16.941 pt2' 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.941 [2024-11-20 05:35:48.713299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 8c7d935e-bac8-4f16-bbec-011527af58cc '!=' 8c7d935e-bac8-4f16-bbec-011527af58cc ']' 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.941 [2024-11-20 05:35:48.741055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.941 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.198 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:25:17.198 "name": "raid_bdev1", 00:25:17.198 "uuid": "8c7d935e-bac8-4f16-bbec-011527af58cc", 00:25:17.198 "strip_size_kb": 0, 00:25:17.198 "state": "online", 00:25:17.198 "raid_level": "raid1", 00:25:17.198 "superblock": true, 00:25:17.198 "num_base_bdevs": 2, 00:25:17.198 "num_base_bdevs_discovered": 1, 00:25:17.198 "num_base_bdevs_operational": 1, 00:25:17.198 "base_bdevs_list": [ 00:25:17.198 { 00:25:17.198 "name": null, 00:25:17.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.198 "is_configured": false, 00:25:17.198 "data_offset": 0, 00:25:17.198 "data_size": 7936 00:25:17.198 }, 00:25:17.198 { 00:25:17.198 "name": "pt2", 00:25:17.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:17.198 "is_configured": true, 00:25:17.198 "data_offset": 256, 00:25:17.198 "data_size": 7936 00:25:17.198 } 00:25:17.198 ] 00:25:17.198 }' 00:25:17.198 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:17.198 05:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:17.457 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:17.457 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.457 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:17.457 [2024-11-20 05:35:49.077101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:17.457 [2024-11-20 05:35:49.077127] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:17.457 [2024-11-20 05:35:49.077191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:17.457 [2024-11-20 05:35:49.077234] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:25:17.457 [2024-11-20 05:35:49.077244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:17.457 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.457 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:17.457 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:17.457 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:17.458 [2024-11-20 05:35:49.133111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:17.458 [2024-11-20 05:35:49.133166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.458 [2024-11-20 05:35:49.133182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:17.458 [2024-11-20 05:35:49.133192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.458 [2024-11-20 05:35:49.135192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.458 [2024-11-20 05:35:49.135230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:17.458 [2024-11-20 05:35:49.135283] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:17.458 [2024-11-20 05:35:49.135331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:17.458 [2024-11-20 05:35:49.135413] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:17.458 [2024-11-20 05:35:49.135426] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:25:17.458 [2024-11-20 05:35:49.135512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:17.458 [2024-11-20 05:35:49.135570] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:17.458 [2024-11-20 05:35:49.135667] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:17.458 [2024-11-20 05:35:49.135745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:17.458 pt2 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:17.458 05:35:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:17.458 "name": "raid_bdev1", 00:25:17.458 "uuid": "8c7d935e-bac8-4f16-bbec-011527af58cc", 00:25:17.458 "strip_size_kb": 0, 00:25:17.458 "state": "online", 00:25:17.458 "raid_level": "raid1", 00:25:17.458 "superblock": true, 00:25:17.458 "num_base_bdevs": 2, 00:25:17.458 "num_base_bdevs_discovered": 1, 00:25:17.458 "num_base_bdevs_operational": 1, 00:25:17.458 "base_bdevs_list": [ 00:25:17.458 { 00:25:17.458 "name": null, 00:25:17.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.458 "is_configured": false, 00:25:17.458 "data_offset": 256, 00:25:17.458 "data_size": 7936 00:25:17.458 }, 00:25:17.458 { 00:25:17.458 "name": "pt2", 00:25:17.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:17.458 "is_configured": true, 00:25:17.458 "data_offset": 256, 00:25:17.458 "data_size": 7936 00:25:17.458 } 00:25:17.458 ] 00:25:17.458 }' 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:17.458 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:17.716 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:17.716 05:35:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.716 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:17.716 [2024-11-20 05:35:49.433161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:17.716 [2024-11-20 05:35:49.433188] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:17.716 [2024-11-20 05:35:49.433244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:17.716 [2024-11-20 05:35:49.433290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:17.716 [2024-11-20 05:35:49.433299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:17.716 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.716 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:17.716 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:25:17.716 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.716 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:17.716 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.716 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:25:17.716 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:25:17.716 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:25:17.716 05:35:49 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:17.716 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.716 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:17.717 [2024-11-20 05:35:49.477204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:17.717 [2024-11-20 05:35:49.477261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.717 [2024-11-20 05:35:49.477279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:25:17.717 [2024-11-20 05:35:49.477288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.717 [2024-11-20 05:35:49.479253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.717 [2024-11-20 05:35:49.479442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:17.717 [2024-11-20 05:35:49.479507] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:17.717 [2024-11-20 05:35:49.479554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:17.717 [2024-11-20 05:35:49.479649] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:17.717 [2024-11-20 05:35:49.479659] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:17.717 [2024-11-20 05:35:49.479676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:25:17.717 [2024-11-20 05:35:49.479725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:17.717 [2024-11-20 05:35:49.479788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:25:17.717 [2024-11-20 05:35:49.479797] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:17.717 [2024-11-20 05:35:49.479861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:17.717 [2024-11-20 05:35:49.479918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:25:17.717 [2024-11-20 05:35:49.479928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:25:17.717 [2024-11-20 05:35:49.479993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:17.717 pt1 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:17.717 05:35:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:17.717 "name": "raid_bdev1", 00:25:17.717 "uuid": "8c7d935e-bac8-4f16-bbec-011527af58cc", 00:25:17.717 "strip_size_kb": 0, 00:25:17.717 "state": "online", 00:25:17.717 "raid_level": "raid1", 00:25:17.717 "superblock": true, 00:25:17.717 "num_base_bdevs": 2, 00:25:17.717 "num_base_bdevs_discovered": 1, 00:25:17.717 "num_base_bdevs_operational": 1, 00:25:17.717 "base_bdevs_list": [ 00:25:17.717 { 00:25:17.717 "name": null, 00:25:17.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.717 "is_configured": false, 00:25:17.717 "data_offset": 256, 00:25:17.717 "data_size": 7936 00:25:17.717 }, 00:25:17.717 { 00:25:17.717 "name": "pt2", 00:25:17.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:17.717 "is_configured": true, 00:25:17.717 "data_offset": 256, 00:25:17.717 "data_size": 7936 00:25:17.717 } 00:25:17.717 ] 00:25:17.717 }' 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:17.717 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:25:17.976 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:25:17.976 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.976 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:17.976 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:17.976 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:18.252 [2024-11-20 05:35:49.833471] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 8c7d935e-bac8-4f16-bbec-011527af58cc '!=' 8c7d935e-bac8-4f16-bbec-011527af58cc ']' 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 86188 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 86188 ']' 00:25:18.252 05:35:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 86188 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86188 00:25:18.252 killing process with pid 86188 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86188' 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 86188 00:25:18.252 [2024-11-20 05:35:49.892403] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:18.252 [2024-11-20 05:35:49.892475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:18.252 [2024-11-20 05:35:49.892514] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:18.252 [2024-11-20 05:35:49.892525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:25:18.252 05:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 86188 00:25:18.252 [2024-11-20 05:35:49.994958] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:18.855 05:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:25:18.855 ************************************ 00:25:18.855 END TEST 
raid_superblock_test_md_interleaved 00:25:18.855 ************************************ 00:25:18.855 00:25:18.855 real 0m4.286s 00:25:18.855 user 0m6.629s 00:25:18.855 sys 0m0.671s 00:25:18.855 05:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:18.855 05:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:18.855 05:35:50 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:25:18.855 05:35:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:25:18.855 05:35:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:18.855 05:35:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:18.855 ************************************ 00:25:18.855 START TEST raid_rebuild_test_sb_md_interleaved 00:25:18.855 ************************************ 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:18.855 05:35:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:18.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:25:18.855 05:35:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=86494 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 86494 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 86494 ']' 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:18.855 05:35:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:18.855 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:18.855 Zero copy mechanism will not be used. 00:25:18.855 [2024-11-20 05:35:50.669627] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:25:18.855 [2024-11-20 05:35:50.669741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86494 ] 00:25:19.113 [2024-11-20 05:35:50.823996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.113 [2024-11-20 05:35:50.908796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.371 [2024-11-20 05:35:51.020556] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:19.371 [2024-11-20 05:35:51.020594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.937 BaseBdev1_malloc 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.937 05:35:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.937 [2024-11-20 05:35:51.508278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:19.937 [2024-11-20 05:35:51.508448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.937 [2024-11-20 05:35:51.508473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:19.937 [2024-11-20 05:35:51.508483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.937 [2024-11-20 05:35:51.510074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.937 [2024-11-20 05:35:51.510107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:19.937 BaseBdev1 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.937 BaseBdev2_malloc 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:25:19.937 [2024-11-20 05:35:51.539936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:19.937 [2024-11-20 05:35:51.540079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.937 [2024-11-20 05:35:51.540100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:19.937 [2024-11-20 05:35:51.540109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.937 [2024-11-20 05:35:51.541651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.937 [2024-11-20 05:35:51.541675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:19.937 BaseBdev2 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.937 spare_malloc 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.937 spare_delay 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.937 [2024-11-20 05:35:51.593041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:19.937 [2024-11-20 05:35:51.593094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.937 [2024-11-20 05:35:51.593111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:19.937 [2024-11-20 05:35:51.593121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.937 [2024-11-20 05:35:51.594724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.937 [2024-11-20 05:35:51.594755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:19.937 spare 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.937 [2024-11-20 05:35:51.601082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:19.937 [2024-11-20 05:35:51.602609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:19.937 [2024-11-20 
05:35:51.602756] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:19.937 [2024-11-20 05:35:51.602767] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:19.937 [2024-11-20 05:35:51.602832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:19.937 [2024-11-20 05:35:51.602891] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:19.937 [2024-11-20 05:35:51.602897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:19.937 [2024-11-20 05:35:51.602955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:19.937 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:19.938 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:19.938 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:19.938 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:19.938 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:19.938 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:19.938 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:25:19.938 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:19.938 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.938 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.938 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.938 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.938 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.938 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:19.938 "name": "raid_bdev1", 00:25:19.938 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:19.938 "strip_size_kb": 0, 00:25:19.938 "state": "online", 00:25:19.938 "raid_level": "raid1", 00:25:19.938 "superblock": true, 00:25:19.938 "num_base_bdevs": 2, 00:25:19.938 "num_base_bdevs_discovered": 2, 00:25:19.938 "num_base_bdevs_operational": 2, 00:25:19.938 "base_bdevs_list": [ 00:25:19.938 { 00:25:19.938 "name": "BaseBdev1", 00:25:19.938 "uuid": "6bf0844a-6b4f-5560-a1df-a8cd95aa17be", 00:25:19.938 "is_configured": true, 00:25:19.938 "data_offset": 256, 00:25:19.938 "data_size": 7936 00:25:19.938 }, 00:25:19.938 { 00:25:19.938 "name": "BaseBdev2", 00:25:19.938 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:19.938 "is_configured": true, 00:25:19.938 "data_offset": 256, 00:25:19.938 "data_size": 7936 00:25:19.938 } 00:25:19.938 ] 00:25:19.938 }' 00:25:19.938 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:19.938 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:20.196 05:35:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:20.196 [2024-11-20 05:35:51.897379] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:25:20.196 05:35:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:20.196 [2024-11-20 05:35:51.961124] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.196 05:35:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.196 "name": "raid_bdev1", 00:25:20.196 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:20.196 "strip_size_kb": 0, 00:25:20.196 "state": "online", 00:25:20.196 "raid_level": "raid1", 00:25:20.196 "superblock": true, 00:25:20.196 "num_base_bdevs": 2, 00:25:20.196 "num_base_bdevs_discovered": 1, 00:25:20.196 "num_base_bdevs_operational": 1, 00:25:20.196 "base_bdevs_list": [ 00:25:20.196 { 00:25:20.196 "name": null, 00:25:20.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.196 "is_configured": false, 00:25:20.196 "data_offset": 0, 00:25:20.196 "data_size": 7936 00:25:20.196 }, 00:25:20.196 { 00:25:20.196 "name": "BaseBdev2", 00:25:20.196 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:20.196 "is_configured": true, 00:25:20.196 "data_offset": 256, 00:25:20.196 "data_size": 7936 00:25:20.196 } 00:25:20.196 ] 00:25:20.196 }' 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.196 05:35:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:20.454 05:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:20.454 05:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.454 05:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:20.454 [2024-11-20 05:35:52.253199] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:20.454 [2024-11-20 05:35:52.262732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:20.454 05:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.454 05:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:25:20.454 [2024-11-20 05:35:52.264282] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:21.827 "name": "raid_bdev1", 00:25:21.827 
"uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:21.827 "strip_size_kb": 0, 00:25:21.827 "state": "online", 00:25:21.827 "raid_level": "raid1", 00:25:21.827 "superblock": true, 00:25:21.827 "num_base_bdevs": 2, 00:25:21.827 "num_base_bdevs_discovered": 2, 00:25:21.827 "num_base_bdevs_operational": 2, 00:25:21.827 "process": { 00:25:21.827 "type": "rebuild", 00:25:21.827 "target": "spare", 00:25:21.827 "progress": { 00:25:21.827 "blocks": 2560, 00:25:21.827 "percent": 32 00:25:21.827 } 00:25:21.827 }, 00:25:21.827 "base_bdevs_list": [ 00:25:21.827 { 00:25:21.827 "name": "spare", 00:25:21.827 "uuid": "749c8fa7-69a3-5d53-b874-94c51c834096", 00:25:21.827 "is_configured": true, 00:25:21.827 "data_offset": 256, 00:25:21.827 "data_size": 7936 00:25:21.827 }, 00:25:21.827 { 00:25:21.827 "name": "BaseBdev2", 00:25:21.827 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:21.827 "is_configured": true, 00:25:21.827 "data_offset": 256, 00:25:21.827 "data_size": 7936 00:25:21.827 } 00:25:21.827 ] 00:25:21.827 }' 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.827 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:21.827 [2024-11-20 05:35:53.378706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:25:21.827 [2024-11-20 05:35:53.469944] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:21.828 [2024-11-20 05:35:53.470021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:21.828 [2024-11-20 05:35:53.470035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:21.828 [2024-11-20 05:35:53.470045] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.828 "name": "raid_bdev1", 00:25:21.828 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:21.828 "strip_size_kb": 0, 00:25:21.828 "state": "online", 00:25:21.828 "raid_level": "raid1", 00:25:21.828 "superblock": true, 00:25:21.828 "num_base_bdevs": 2, 00:25:21.828 "num_base_bdevs_discovered": 1, 00:25:21.828 "num_base_bdevs_operational": 1, 00:25:21.828 "base_bdevs_list": [ 00:25:21.828 { 00:25:21.828 "name": null, 00:25:21.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.828 "is_configured": false, 00:25:21.828 "data_offset": 0, 00:25:21.828 "data_size": 7936 00:25:21.828 }, 00:25:21.828 { 00:25:21.828 "name": "BaseBdev2", 00:25:21.828 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:21.828 "is_configured": true, 00:25:21.828 "data_offset": 256, 00:25:21.828 "data_size": 7936 00:25:21.828 } 00:25:21.828 ] 00:25:21.828 }' 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.828 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:22.087 "name": "raid_bdev1", 00:25:22.087 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:22.087 "strip_size_kb": 0, 00:25:22.087 "state": "online", 00:25:22.087 "raid_level": "raid1", 00:25:22.087 "superblock": true, 00:25:22.087 "num_base_bdevs": 2, 00:25:22.087 "num_base_bdevs_discovered": 1, 00:25:22.087 "num_base_bdevs_operational": 1, 00:25:22.087 "base_bdevs_list": [ 00:25:22.087 { 00:25:22.087 "name": null, 00:25:22.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.087 "is_configured": false, 00:25:22.087 "data_offset": 0, 00:25:22.087 "data_size": 7936 00:25:22.087 }, 00:25:22.087 { 00:25:22.087 "name": "BaseBdev2", 00:25:22.087 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:22.087 "is_configured": true, 00:25:22.087 "data_offset": 256, 00:25:22.087 "data_size": 7936 00:25:22.087 } 00:25:22.087 ] 00:25:22.087 }' 
00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 [2024-11-20 05:35:53.888960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:22.087 [2024-11-20 05:35:53.898382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.087 05:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:22.087 [2024-11-20 05:35:53.900090] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:23.460 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:23.460 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:23.460 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:23.460 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:25:23.460 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:23.460 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.460 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.460 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.460 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:23.460 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.460 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:23.460 "name": "raid_bdev1", 00:25:23.460 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:23.460 "strip_size_kb": 0, 00:25:23.460 "state": "online", 00:25:23.460 "raid_level": "raid1", 00:25:23.460 "superblock": true, 00:25:23.460 "num_base_bdevs": 2, 00:25:23.460 "num_base_bdevs_discovered": 2, 00:25:23.460 "num_base_bdevs_operational": 2, 00:25:23.460 "process": { 00:25:23.460 "type": "rebuild", 00:25:23.460 "target": "spare", 00:25:23.460 "progress": { 00:25:23.460 "blocks": 2560, 00:25:23.460 "percent": 32 00:25:23.460 } 00:25:23.460 }, 00:25:23.460 "base_bdevs_list": [ 00:25:23.460 { 00:25:23.460 "name": "spare", 00:25:23.460 "uuid": "749c8fa7-69a3-5d53-b874-94c51c834096", 00:25:23.460 "is_configured": true, 00:25:23.460 "data_offset": 256, 00:25:23.460 "data_size": 7936 00:25:23.460 }, 00:25:23.460 { 00:25:23.460 "name": "BaseBdev2", 00:25:23.460 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:23.460 "is_configured": true, 00:25:23.460 "data_offset": 256, 00:25:23.460 "data_size": 7936 00:25:23.460 } 00:25:23.460 ] 00:25:23.460 }' 00:25:23.460 05:35:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:23.460 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:23.460 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:23.460 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:23.460 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:25:23.460 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:25:23.461 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:25:23.461 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:25:23.461 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:25:23.461 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:25:23.461 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=589 00:25:23.461 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:23.461 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:23.461 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:23.461 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:23.461 05:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:23.461 05:35:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:23.461 05:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.461 05:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.461 05:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.461 05:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:23.461 05:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.461 05:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:23.461 "name": "raid_bdev1", 00:25:23.461 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:23.461 "strip_size_kb": 0, 00:25:23.461 "state": "online", 00:25:23.461 "raid_level": "raid1", 00:25:23.461 "superblock": true, 00:25:23.461 "num_base_bdevs": 2, 00:25:23.461 "num_base_bdevs_discovered": 2, 00:25:23.461 "num_base_bdevs_operational": 2, 00:25:23.461 "process": { 00:25:23.461 "type": "rebuild", 00:25:23.461 "target": "spare", 00:25:23.461 "progress": { 00:25:23.461 "blocks": 2816, 00:25:23.461 "percent": 35 00:25:23.461 } 00:25:23.461 }, 00:25:23.461 "base_bdevs_list": [ 00:25:23.461 { 00:25:23.461 "name": "spare", 00:25:23.461 "uuid": "749c8fa7-69a3-5d53-b874-94c51c834096", 00:25:23.461 "is_configured": true, 00:25:23.461 "data_offset": 256, 00:25:23.461 "data_size": 7936 00:25:23.461 }, 00:25:23.461 { 00:25:23.461 "name": "BaseBdev2", 00:25:23.461 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:23.461 "is_configured": true, 00:25:23.461 "data_offset": 256, 00:25:23.461 "data_size": 7936 00:25:23.461 } 00:25:23.461 ] 00:25:23.461 }' 00:25:23.461 05:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:23.461 05:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:23.461 05:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:23.461 05:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:23.461 05:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:24.395 05:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:24.395 05:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:24.395 05:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:24.395 05:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:24.395 05:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:24.395 05:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:24.395 05:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.395 05:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:24.395 05:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.395 05:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:24.395 05:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.395 05:35:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:24.395 "name": "raid_bdev1", 00:25:24.395 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:24.395 "strip_size_kb": 0, 00:25:24.395 "state": "online", 00:25:24.395 "raid_level": "raid1", 00:25:24.395 "superblock": true, 00:25:24.395 "num_base_bdevs": 2, 00:25:24.395 "num_base_bdevs_discovered": 2, 00:25:24.395 "num_base_bdevs_operational": 2, 00:25:24.395 "process": { 00:25:24.395 "type": "rebuild", 00:25:24.395 "target": "spare", 00:25:24.395 "progress": { 00:25:24.395 "blocks": 5376, 00:25:24.395 "percent": 67 00:25:24.395 } 00:25:24.395 }, 00:25:24.395 "base_bdevs_list": [ 00:25:24.395 { 00:25:24.395 "name": "spare", 00:25:24.395 "uuid": "749c8fa7-69a3-5d53-b874-94c51c834096", 00:25:24.395 "is_configured": true, 00:25:24.395 "data_offset": 256, 00:25:24.395 "data_size": 7936 00:25:24.395 }, 00:25:24.395 { 00:25:24.395 "name": "BaseBdev2", 00:25:24.395 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:24.395 "is_configured": true, 00:25:24.395 "data_offset": 256, 00:25:24.395 "data_size": 7936 00:25:24.395 } 00:25:24.395 ] 00:25:24.395 }' 00:25:24.395 05:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:24.395 05:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:24.395 05:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:24.395 05:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:24.395 05:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:25.422 [2024-11-20 05:35:57.014110] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:25.422 [2024-11-20 05:35:57.014304] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:25.422 [2024-11-20 05:35:57.014420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:25.422 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:25.422 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:25.422 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:25.422 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:25.422 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:25.422 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:25.422 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.422 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.422 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.422 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:25.422 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.422 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:25.422 "name": "raid_bdev1", 00:25:25.422 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:25.422 "strip_size_kb": 0, 00:25:25.422 "state": "online", 00:25:25.422 "raid_level": "raid1", 00:25:25.422 "superblock": true, 00:25:25.422 "num_base_bdevs": 2, 00:25:25.422 
"num_base_bdevs_discovered": 2, 00:25:25.422 "num_base_bdevs_operational": 2, 00:25:25.422 "base_bdevs_list": [ 00:25:25.422 { 00:25:25.422 "name": "spare", 00:25:25.422 "uuid": "749c8fa7-69a3-5d53-b874-94c51c834096", 00:25:25.422 "is_configured": true, 00:25:25.422 "data_offset": 256, 00:25:25.422 "data_size": 7936 00:25:25.422 }, 00:25:25.422 { 00:25:25.422 "name": "BaseBdev2", 00:25:25.422 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:25.422 "is_configured": true, 00:25:25.422 "data_offset": 256, 00:25:25.422 "data_size": 7936 00:25:25.422 } 00:25:25.422 ] 00:25:25.422 }' 00:25:25.422 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:25.680 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.681 05:35:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:25.681 "name": "raid_bdev1", 00:25:25.681 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:25.681 "strip_size_kb": 0, 00:25:25.681 "state": "online", 00:25:25.681 "raid_level": "raid1", 00:25:25.681 "superblock": true, 00:25:25.681 "num_base_bdevs": 2, 00:25:25.681 "num_base_bdevs_discovered": 2, 00:25:25.681 "num_base_bdevs_operational": 2, 00:25:25.681 "base_bdevs_list": [ 00:25:25.681 { 00:25:25.681 "name": "spare", 00:25:25.681 "uuid": "749c8fa7-69a3-5d53-b874-94c51c834096", 00:25:25.681 "is_configured": true, 00:25:25.681 "data_offset": 256, 00:25:25.681 "data_size": 7936 00:25:25.681 }, 00:25:25.681 { 00:25:25.681 "name": "BaseBdev2", 00:25:25.681 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:25.681 "is_configured": true, 00:25:25.681 "data_offset": 256, 00:25:25.681 "data_size": 7936 00:25:25.681 } 00:25:25.681 ] 00:25:25.681 }' 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:25.681 05:35:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:25.681 "name": 
"raid_bdev1", 00:25:25.681 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:25.681 "strip_size_kb": 0, 00:25:25.681 "state": "online", 00:25:25.681 "raid_level": "raid1", 00:25:25.681 "superblock": true, 00:25:25.681 "num_base_bdevs": 2, 00:25:25.681 "num_base_bdevs_discovered": 2, 00:25:25.681 "num_base_bdevs_operational": 2, 00:25:25.681 "base_bdevs_list": [ 00:25:25.681 { 00:25:25.681 "name": "spare", 00:25:25.681 "uuid": "749c8fa7-69a3-5d53-b874-94c51c834096", 00:25:25.681 "is_configured": true, 00:25:25.681 "data_offset": 256, 00:25:25.681 "data_size": 7936 00:25:25.681 }, 00:25:25.681 { 00:25:25.681 "name": "BaseBdev2", 00:25:25.681 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:25.681 "is_configured": true, 00:25:25.681 "data_offset": 256, 00:25:25.681 "data_size": 7936 00:25:25.681 } 00:25:25.681 ] 00:25:25.681 }' 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:25.681 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:25.938 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:25.938 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.938 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:25.938 [2024-11-20 05:35:57.733261] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:25.938 [2024-11-20 05:35:57.733404] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:25.938 [2024-11-20 05:35:57.733483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:25.938 [2024-11-20 05:35:57.733549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:25.938 [2024-11-20 
05:35:57.733557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:25.938 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.938 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.938 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.938 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:25.938 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:25:25.938 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.938 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:25.938 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:25:25.938 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:25.938 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:25.938 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.938 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.196 05:35:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.196 [2024-11-20 05:35:57.781268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:26.196 [2024-11-20 05:35:57.781455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.196 [2024-11-20 05:35:57.781492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:26.196 [2024-11-20 05:35:57.781560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.196 [2024-11-20 05:35:57.783247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.196 [2024-11-20 05:35:57.783354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:26.196 [2024-11-20 05:35:57.783466] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:26.196 [2024-11-20 05:35:57.783510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:26.196 [2024-11-20 05:35:57.783599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:26.196 spare 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.196 [2024-11-20 05:35:57.883674] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:26.196 [2024-11-20 05:35:57.883839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:26.196 [2024-11-20 05:35:57.883959] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:26.196 [2024-11-20 05:35:57.884090] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:26.196 [2024-11-20 05:35:57.884139] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:26.196 [2024-11-20 05:35:57.884327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.196 05:35:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:26.196 "name": "raid_bdev1", 00:25:26.196 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:26.196 "strip_size_kb": 0, 00:25:26.196 "state": "online", 00:25:26.196 "raid_level": "raid1", 00:25:26.196 "superblock": true, 00:25:26.196 "num_base_bdevs": 2, 00:25:26.196 "num_base_bdevs_discovered": 2, 00:25:26.196 "num_base_bdevs_operational": 2, 00:25:26.196 "base_bdevs_list": [ 00:25:26.196 { 00:25:26.196 "name": "spare", 00:25:26.196 "uuid": "749c8fa7-69a3-5d53-b874-94c51c834096", 00:25:26.196 "is_configured": true, 00:25:26.196 "data_offset": 256, 00:25:26.196 "data_size": 7936 00:25:26.196 }, 00:25:26.196 { 00:25:26.196 "name": "BaseBdev2", 00:25:26.196 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:26.196 "is_configured": true, 00:25:26.196 "data_offset": 256, 00:25:26.196 "data_size": 7936 00:25:26.196 } 00:25:26.196 ] 00:25:26.196 }' 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:26.196 05:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.453 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:26.453 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:26.453 05:35:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:26.453 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:26.453 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:26.453 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.453 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.453 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.453 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.453 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.453 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:26.453 "name": "raid_bdev1", 00:25:26.453 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:26.453 "strip_size_kb": 0, 00:25:26.453 "state": "online", 00:25:26.453 "raid_level": "raid1", 00:25:26.453 "superblock": true, 00:25:26.453 "num_base_bdevs": 2, 00:25:26.453 "num_base_bdevs_discovered": 2, 00:25:26.453 "num_base_bdevs_operational": 2, 00:25:26.453 "base_bdevs_list": [ 00:25:26.453 { 00:25:26.453 "name": "spare", 00:25:26.453 "uuid": "749c8fa7-69a3-5d53-b874-94c51c834096", 00:25:26.453 "is_configured": true, 00:25:26.453 "data_offset": 256, 00:25:26.453 "data_size": 7936 00:25:26.453 }, 00:25:26.453 { 00:25:26.453 "name": "BaseBdev2", 00:25:26.453 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:26.453 "is_configured": true, 00:25:26.453 "data_offset": 256, 00:25:26.453 "data_size": 7936 00:25:26.453 } 00:25:26.453 ] 00:25:26.453 }' 00:25:26.453 05:35:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:26.453 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:26.453 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:26.710 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:26.710 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.710 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.710 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:26.710 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.710 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.710 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:26.710 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:26.710 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.710 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.710 [2024-11-20 05:35:58.341433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:26.710 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.710 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:26.711 05:35:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:26.711 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:26.711 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:26.711 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:26.711 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:26.711 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:26.711 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:26.711 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:26.711 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:26.711 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.711 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.711 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.711 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.711 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.711 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:26.711 "name": "raid_bdev1", 00:25:26.711 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:26.711 "strip_size_kb": 0, 00:25:26.711 "state": "online", 00:25:26.711 
"raid_level": "raid1", 00:25:26.711 "superblock": true, 00:25:26.711 "num_base_bdevs": 2, 00:25:26.711 "num_base_bdevs_discovered": 1, 00:25:26.711 "num_base_bdevs_operational": 1, 00:25:26.711 "base_bdevs_list": [ 00:25:26.711 { 00:25:26.711 "name": null, 00:25:26.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.711 "is_configured": false, 00:25:26.711 "data_offset": 0, 00:25:26.711 "data_size": 7936 00:25:26.711 }, 00:25:26.711 { 00:25:26.711 "name": "BaseBdev2", 00:25:26.711 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:26.711 "is_configured": true, 00:25:26.711 "data_offset": 256, 00:25:26.711 "data_size": 7936 00:25:26.711 } 00:25:26.711 ] 00:25:26.711 }' 00:25:26.711 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:26.711 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.968 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:26.968 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.968 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.969 [2024-11-20 05:35:58.649542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:26.969 [2024-11-20 05:35:58.649833] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:26.969 [2024-11-20 05:35:58.649852] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:26.969 [2024-11-20 05:35:58.649892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:26.969 [2024-11-20 05:35:58.658718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:25:26.969 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.969 05:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:26.969 [2024-11-20 05:35:58.660235] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:27.899 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:27.899 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:27.899 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:27.899 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:27.899 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:27.899 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.899 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.899 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.899 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:27.899 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.899 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:25:27.899 "name": "raid_bdev1", 00:25:27.899 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:27.899 "strip_size_kb": 0, 00:25:27.899 "state": "online", 00:25:27.899 "raid_level": "raid1", 00:25:27.899 "superblock": true, 00:25:27.899 "num_base_bdevs": 2, 00:25:27.899 "num_base_bdevs_discovered": 2, 00:25:27.899 "num_base_bdevs_operational": 2, 00:25:27.899 "process": { 00:25:27.899 "type": "rebuild", 00:25:27.899 "target": "spare", 00:25:27.899 "progress": { 00:25:27.899 "blocks": 2560, 00:25:27.899 "percent": 32 00:25:27.899 } 00:25:27.899 }, 00:25:27.899 "base_bdevs_list": [ 00:25:27.899 { 00:25:27.899 "name": "spare", 00:25:27.899 "uuid": "749c8fa7-69a3-5d53-b874-94c51c834096", 00:25:27.899 "is_configured": true, 00:25:27.899 "data_offset": 256, 00:25:27.899 "data_size": 7936 00:25:27.899 }, 00:25:27.899 { 00:25:27.899 "name": "BaseBdev2", 00:25:27.899 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:27.899 "is_configured": true, 00:25:27.899 "data_offset": 256, 00:25:27.899 "data_size": 7936 00:25:27.899 } 00:25:27.899 ] 00:25:27.899 }' 00:25:27.899 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:27.899 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:28.157 [2024-11-20 05:35:59.762486] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:28.157 [2024-11-20 05:35:59.765260] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:28.157 [2024-11-20 05:35:59.765313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:28.157 [2024-11-20 05:35:59.765325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:28.157 [2024-11-20 05:35:59.765334] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:28.157 05:35:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:28.157 "name": "raid_bdev1", 00:25:28.157 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:28.157 "strip_size_kb": 0, 00:25:28.157 "state": "online", 00:25:28.157 "raid_level": "raid1", 00:25:28.157 "superblock": true, 00:25:28.157 "num_base_bdevs": 2, 00:25:28.157 "num_base_bdevs_discovered": 1, 00:25:28.157 "num_base_bdevs_operational": 1, 00:25:28.157 "base_bdevs_list": [ 00:25:28.157 { 00:25:28.157 "name": null, 00:25:28.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.157 "is_configured": false, 00:25:28.157 "data_offset": 0, 00:25:28.157 "data_size": 7936 00:25:28.157 }, 00:25:28.157 { 00:25:28.157 "name": "BaseBdev2", 00:25:28.157 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:28.157 "is_configured": true, 00:25:28.157 "data_offset": 256, 00:25:28.157 "data_size": 7936 00:25:28.157 } 00:25:28.157 ] 00:25:28.157 }' 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:28.157 05:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:28.418 05:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:28.418 05:36:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.418 05:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:28.418 [2024-11-20 05:36:00.079955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:28.418 [2024-11-20 05:36:00.080126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:28.418 [2024-11-20 05:36:00.080152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:25:28.418 [2024-11-20 05:36:00.080162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:28.418 [2024-11-20 05:36:00.080326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:28.418 [2024-11-20 05:36:00.080338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:28.418 [2024-11-20 05:36:00.080400] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:28.418 [2024-11-20 05:36:00.080412] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:28.418 [2024-11-20 05:36:00.080420] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:28.418 [2024-11-20 05:36:00.080438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:28.418 [2024-11-20 05:36:00.089382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:28.418 spare 00:25:28.418 05:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.418 05:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:28.418 [2024-11-20 05:36:00.090966] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:29.354 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:29.354 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:29.354 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:29.354 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:29.354 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:29.354 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.355 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.355 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.355 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.355 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.355 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:25:29.355 "name": "raid_bdev1", 00:25:29.355 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:29.355 "strip_size_kb": 0, 00:25:29.355 "state": "online", 00:25:29.355 "raid_level": "raid1", 00:25:29.355 "superblock": true, 00:25:29.355 "num_base_bdevs": 2, 00:25:29.355 "num_base_bdevs_discovered": 2, 00:25:29.355 "num_base_bdevs_operational": 2, 00:25:29.355 "process": { 00:25:29.355 "type": "rebuild", 00:25:29.355 "target": "spare", 00:25:29.355 "progress": { 00:25:29.355 "blocks": 2560, 00:25:29.355 "percent": 32 00:25:29.355 } 00:25:29.355 }, 00:25:29.355 "base_bdevs_list": [ 00:25:29.355 { 00:25:29.355 "name": "spare", 00:25:29.355 "uuid": "749c8fa7-69a3-5d53-b874-94c51c834096", 00:25:29.355 "is_configured": true, 00:25:29.355 "data_offset": 256, 00:25:29.355 "data_size": 7936 00:25:29.355 }, 00:25:29.355 { 00:25:29.355 "name": "BaseBdev2", 00:25:29.355 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:29.355 "is_configured": true, 00:25:29.355 "data_offset": 256, 00:25:29.355 "data_size": 7936 00:25:29.355 } 00:25:29.355 ] 00:25:29.355 }' 00:25:29.355 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:29.355 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:29.355 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.612 [2024-11-20 
05:36:01.205360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:29.612 [2024-11-20 05:36:01.296655] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:29.612 [2024-11-20 05:36:01.296726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.612 [2024-11-20 05:36:01.296742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:29.612 [2024-11-20 05:36:01.296748] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:29.612 05:36:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.612 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:29.612 "name": "raid_bdev1", 00:25:29.612 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:29.612 "strip_size_kb": 0, 00:25:29.612 "state": "online", 00:25:29.612 "raid_level": "raid1", 00:25:29.612 "superblock": true, 00:25:29.612 "num_base_bdevs": 2, 00:25:29.612 "num_base_bdevs_discovered": 1, 00:25:29.612 "num_base_bdevs_operational": 1, 00:25:29.612 "base_bdevs_list": [ 00:25:29.612 { 00:25:29.612 "name": null, 00:25:29.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.612 "is_configured": false, 00:25:29.613 "data_offset": 0, 00:25:29.613 "data_size": 7936 00:25:29.613 }, 00:25:29.613 { 00:25:29.613 "name": "BaseBdev2", 00:25:29.613 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:29.613 "is_configured": true, 00:25:29.613 "data_offset": 256, 00:25:29.613 "data_size": 7936 00:25:29.613 } 00:25:29.613 ] 00:25:29.613 }' 00:25:29.613 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:29.613 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.870 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:29.870 05:36:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:29.870 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:29.870 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:29.870 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:29.870 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.870 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.870 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.870 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.870 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.870 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:29.870 "name": "raid_bdev1", 00:25:29.870 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:29.870 "strip_size_kb": 0, 00:25:29.870 "state": "online", 00:25:29.870 "raid_level": "raid1", 00:25:29.870 "superblock": true, 00:25:29.870 "num_base_bdevs": 2, 00:25:29.870 "num_base_bdevs_discovered": 1, 00:25:29.870 "num_base_bdevs_operational": 1, 00:25:29.870 "base_bdevs_list": [ 00:25:29.870 { 00:25:29.871 "name": null, 00:25:29.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.871 "is_configured": false, 00:25:29.871 "data_offset": 0, 00:25:29.871 "data_size": 7936 00:25:29.871 }, 00:25:29.871 { 00:25:29.871 "name": "BaseBdev2", 00:25:29.871 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:29.871 "is_configured": true, 00:25:29.871 "data_offset": 256, 
00:25:29.871 "data_size": 7936 00:25:29.871 } 00:25:29.871 ] 00:25:29.871 }' 00:25:29.871 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:29.871 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:29.871 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:30.129 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:30.129 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:30.129 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.129 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.129 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.129 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:30.129 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.129 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.129 [2024-11-20 05:36:01.723443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:30.129 [2024-11-20 05:36:01.723499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:30.129 [2024-11-20 05:36:01.723520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:30.129 [2024-11-20 05:36:01.723528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:30.129 [2024-11-20 05:36:01.723672] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:30.129 [2024-11-20 05:36:01.723682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:30.129 [2024-11-20 05:36:01.723725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:30.129 [2024-11-20 05:36:01.723736] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:30.129 [2024-11-20 05:36:01.723744] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:30.129 [2024-11-20 05:36:01.723752] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:30.129 BaseBdev1 00:25:30.129 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.129 05:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:31.126 05:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:31.126 05:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:31.126 05:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:31.126 05:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:31.126 05:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:31.126 05:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:31.126 05:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:31.126 05:36:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:31.126 05:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:31.126 05:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:31.126 05:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.126 05:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.126 05:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.126 05:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.126 05:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.126 05:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:31.126 "name": "raid_bdev1", 00:25:31.126 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:31.126 "strip_size_kb": 0, 00:25:31.126 "state": "online", 00:25:31.126 "raid_level": "raid1", 00:25:31.126 "superblock": true, 00:25:31.126 "num_base_bdevs": 2, 00:25:31.126 "num_base_bdevs_discovered": 1, 00:25:31.126 "num_base_bdevs_operational": 1, 00:25:31.126 "base_bdevs_list": [ 00:25:31.126 { 00:25:31.126 "name": null, 00:25:31.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.126 "is_configured": false, 00:25:31.126 "data_offset": 0, 00:25:31.126 "data_size": 7936 00:25:31.126 }, 00:25:31.126 { 00:25:31.126 "name": "BaseBdev2", 00:25:31.126 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:31.126 "is_configured": true, 00:25:31.126 "data_offset": 256, 00:25:31.126 "data_size": 7936 00:25:31.126 } 00:25:31.126 ] 00:25:31.126 }' 00:25:31.126 05:36:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:31.126 05:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:31.384 "name": "raid_bdev1", 00:25:31.384 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:31.384 "strip_size_kb": 0, 00:25:31.384 "state": "online", 00:25:31.384 "raid_level": "raid1", 00:25:31.384 "superblock": true, 00:25:31.384 "num_base_bdevs": 2, 00:25:31.384 "num_base_bdevs_discovered": 1, 00:25:31.384 "num_base_bdevs_operational": 1, 00:25:31.384 "base_bdevs_list": [ 00:25:31.384 { 00:25:31.384 "name": 
null, 00:25:31.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.384 "is_configured": false, 00:25:31.384 "data_offset": 0, 00:25:31.384 "data_size": 7936 00:25:31.384 }, 00:25:31.384 { 00:25:31.384 "name": "BaseBdev2", 00:25:31.384 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:31.384 "is_configured": true, 00:25:31.384 "data_offset": 256, 00:25:31.384 "data_size": 7936 00:25:31.384 } 00:25:31.384 ] 00:25:31.384 }' 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.384 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.384 [2024-11-20 05:36:03.163753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:31.385 [2024-11-20 05:36:03.163875] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:31.385 [2024-11-20 05:36:03.163889] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:31.385 request: 00:25:31.385 { 00:25:31.385 "base_bdev": "BaseBdev1", 00:25:31.385 "raid_bdev": "raid_bdev1", 00:25:31.385 "method": "bdev_raid_add_base_bdev", 00:25:31.385 "req_id": 1 00:25:31.385 } 00:25:31.385 Got JSON-RPC error response 00:25:31.385 response: 00:25:31.385 { 00:25:31.385 "code": -22, 00:25:31.385 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:31.385 } 00:25:31.385 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:31.385 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:25:31.385 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:31.385 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:31.385 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:31.385 05:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:32.758 "name": "raid_bdev1", 00:25:32.758 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:32.758 "strip_size_kb": 0, 
00:25:32.758 "state": "online", 00:25:32.758 "raid_level": "raid1", 00:25:32.758 "superblock": true, 00:25:32.758 "num_base_bdevs": 2, 00:25:32.758 "num_base_bdevs_discovered": 1, 00:25:32.758 "num_base_bdevs_operational": 1, 00:25:32.758 "base_bdevs_list": [ 00:25:32.758 { 00:25:32.758 "name": null, 00:25:32.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.758 "is_configured": false, 00:25:32.758 "data_offset": 0, 00:25:32.758 "data_size": 7936 00:25:32.758 }, 00:25:32.758 { 00:25:32.758 "name": "BaseBdev2", 00:25:32.758 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:32.758 "is_configured": true, 00:25:32.758 "data_offset": 256, 00:25:32.758 "data_size": 7936 00:25:32.758 } 00:25:32.758 ] 00:25:32.758 }' 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.758 
05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:32.758 "name": "raid_bdev1", 00:25:32.758 "uuid": "bca4ae38-4761-405d-ab9c-038d7bb5750d", 00:25:32.758 "strip_size_kb": 0, 00:25:32.758 "state": "online", 00:25:32.758 "raid_level": "raid1", 00:25:32.758 "superblock": true, 00:25:32.758 "num_base_bdevs": 2, 00:25:32.758 "num_base_bdevs_discovered": 1, 00:25:32.758 "num_base_bdevs_operational": 1, 00:25:32.758 "base_bdevs_list": [ 00:25:32.758 { 00:25:32.758 "name": null, 00:25:32.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.758 "is_configured": false, 00:25:32.758 "data_offset": 0, 00:25:32.758 "data_size": 7936 00:25:32.758 }, 00:25:32.758 { 00:25:32.758 "name": "BaseBdev2", 00:25:32.758 "uuid": "88eb2e92-4c3b-57d6-9f79-2e559b93f100", 00:25:32.758 "is_configured": true, 00:25:32.758 "data_offset": 256, 00:25:32.758 "data_size": 7936 00:25:32.758 } 00:25:32.758 ] 00:25:32.758 }' 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 86494 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 86494 ']' 00:25:32.758 05:36:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 86494 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:32.758 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86494 00:25:33.018 killing process with pid 86494 00:25:33.018 Received shutdown signal, test time was about 60.000000 seconds 00:25:33.018 00:25:33.018 Latency(us) 00:25:33.018 [2024-11-20T05:36:04.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.018 [2024-11-20T05:36:04.853Z] =================================================================================================================== 00:25:33.018 [2024-11-20T05:36:04.853Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:33.018 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:33.018 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:33.018 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86494' 00:25:33.018 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 86494 00:25:33.018 [2024-11-20 05:36:04.603971] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:33.018 05:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 86494 00:25:33.018 [2024-11-20 05:36:04.604083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:33.018 [2024-11-20 05:36:04.604122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:25:33.018 [2024-11-20 05:36:04.604132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:33.018 [2024-11-20 05:36:04.755368] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:33.617 05:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:25:33.617 00:25:33.617 real 0m14.715s 00:25:33.617 user 0m18.591s 00:25:33.617 sys 0m1.091s 00:25:33.617 ************************************ 00:25:33.618 END TEST raid_rebuild_test_sb_md_interleaved 00:25:33.618 ************************************ 00:25:33.618 05:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:33.618 05:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:33.618 05:36:05 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:25:33.618 05:36:05 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:25:33.618 05:36:05 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 86494 ']' 00:25:33.618 05:36:05 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 86494 00:25:33.618 05:36:05 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:25:33.618 00:25:33.618 real 9m29.425s 00:25:33.618 user 12m37.861s 00:25:33.618 sys 1m22.850s 00:25:33.618 05:36:05 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:33.618 ************************************ 00:25:33.618 END TEST bdev_raid 00:25:33.618 ************************************ 00:25:33.618 05:36:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:33.618 05:36:05 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:25:33.618 05:36:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:33.618 05:36:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:33.618 05:36:05 -- common/autotest_common.sh@10 -- # set +x 00:25:33.618 
************************************ 00:25:33.618 START TEST spdkcli_raid 00:25:33.618 ************************************ 00:25:33.618 05:36:05 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:25:33.877 * Looking for test storage... 00:25:33.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:33.877 05:36:05 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:33.877 05:36:05 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:25:33.877 05:36:05 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:33.877 05:36:05 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:33.877 05:36:05 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:25:33.877 05:36:05 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:33.877 05:36:05 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:33.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.877 --rc genhtml_branch_coverage=1 00:25:33.877 --rc genhtml_function_coverage=1 00:25:33.877 --rc genhtml_legend=1 00:25:33.877 --rc geninfo_all_blocks=1 00:25:33.877 --rc geninfo_unexecuted_blocks=1 00:25:33.877 00:25:33.877 ' 00:25:33.877 05:36:05 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:33.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.877 --rc genhtml_branch_coverage=1 00:25:33.877 --rc genhtml_function_coverage=1 00:25:33.877 --rc genhtml_legend=1 00:25:33.877 --rc geninfo_all_blocks=1 00:25:33.877 --rc geninfo_unexecuted_blocks=1 00:25:33.877 00:25:33.877 ' 00:25:33.877 
05:36:05 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:33.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.877 --rc genhtml_branch_coverage=1 00:25:33.877 --rc genhtml_function_coverage=1 00:25:33.877 --rc genhtml_legend=1 00:25:33.877 --rc geninfo_all_blocks=1 00:25:33.877 --rc geninfo_unexecuted_blocks=1 00:25:33.877 00:25:33.877 ' 00:25:33.877 05:36:05 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:33.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.877 --rc genhtml_branch_coverage=1 00:25:33.877 --rc genhtml_function_coverage=1 00:25:33.877 --rc genhtml_legend=1 00:25:33.877 --rc geninfo_all_blocks=1 00:25:33.877 --rc geninfo_unexecuted_blocks=1 00:25:33.877 00:25:33.877 ' 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:25:33.877 05:36:05 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:25:33.877 05:36:05 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:33.877 05:36:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=87143 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 87143 00:25:33.877 05:36:05 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 87143 ']' 00:25:33.877 05:36:05 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:25:33.877 05:36:05 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.877 05:36:05 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:33.877 05:36:05 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.877 05:36:05 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:33.877 05:36:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:33.877 [2024-11-20 05:36:05.648490] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:25:33.878 [2024-11-20 05:36:05.649282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87143 ] 00:25:34.136 [2024-11-20 05:36:05.809850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:34.136 [2024-11-20 05:36:05.909972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.136 [2024-11-20 05:36:05.910053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.701 05:36:06 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:34.701 05:36:06 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:25:34.701 05:36:06 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:25:34.701 05:36:06 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:34.701 05:36:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:34.959 05:36:06 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:25:34.959 05:36:06 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:34.959 05:36:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:34.959 05:36:06 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:34.959 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:34.959 ' 00:25:36.332 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:25:36.332 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:25:36.332 05:36:08 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:25:36.332 05:36:08 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:36.332 05:36:08 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:25:36.332 05:36:08 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:25:36.332 05:36:08 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:36.332 05:36:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:36.332 05:36:08 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:25:36.332 ' 00:25:37.706 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:25:37.706 05:36:09 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:25:37.706 05:36:09 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:37.706 05:36:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:37.706 05:36:09 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:25:37.706 05:36:09 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:37.706 05:36:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:37.706 05:36:09 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:25:37.706 05:36:09 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:25:37.964 05:36:09 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:25:38.222 05:36:09 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:25:38.222 05:36:09 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:25:38.222 05:36:09 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:38.222 05:36:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:38.222 05:36:09 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:25:38.222 05:36:09 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:38.222 05:36:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:38.222 05:36:09 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:25:38.222 ' 00:25:39.186 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:25:39.186 05:36:10 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:25:39.186 05:36:10 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:39.186 05:36:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:39.186 05:36:10 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:25:39.186 05:36:10 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:39.186 05:36:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:39.186 05:36:10 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:25:39.186 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:25:39.186 ' 00:25:40.557 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:25:40.557 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:25:40.557 05:36:12 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:25:40.557 05:36:12 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:40.557 05:36:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:40.557 05:36:12 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 87143 00:25:40.558 05:36:12 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 87143 ']' 00:25:40.558 05:36:12 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 87143 00:25:40.558 05:36:12 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:25:40.558 05:36:12 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:40.558 05:36:12 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87143 00:25:40.815 05:36:12 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:40.815 killing process with pid 87143 00:25:40.815 05:36:12 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:40.815 05:36:12 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87143' 00:25:40.815 05:36:12 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 87143 00:25:40.815 05:36:12 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 87143 00:25:42.195 05:36:13 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:25:42.195 05:36:13 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 87143 ']' 00:25:42.195 05:36:13 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 87143 00:25:42.195 05:36:13 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 87143 ']' 00:25:42.195 05:36:13 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 87143 00:25:42.195 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (87143) - No such process 00:25:42.195 Process with pid 87143 is not found 00:25:42.195 05:36:13 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 87143 is not found' 00:25:42.195 05:36:13 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:25:42.195 05:36:13 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:42.195 05:36:13 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:42.195 05:36:13 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:42.195 ************************************ 00:25:42.195 END TEST spdkcli_raid 
00:25:42.195 ************************************ 00:25:42.195 00:25:42.195 real 0m8.223s 00:25:42.195 user 0m17.082s 00:25:42.195 sys 0m0.846s 00:25:42.195 05:36:13 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:42.195 05:36:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:42.195 05:36:13 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:25:42.195 05:36:13 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:42.195 05:36:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:42.196 05:36:13 -- common/autotest_common.sh@10 -- # set +x 00:25:42.196 ************************************ 00:25:42.196 START TEST blockdev_raid5f 00:25:42.196 ************************************ 00:25:42.196 05:36:13 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:25:42.196 * Looking for test storage... 00:25:42.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:25:42.196 05:36:13 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:42.196 05:36:13 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:42.196 05:36:13 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:25:42.196 05:36:13 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.196 05:36:13 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:25:42.196 05:36:13 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.196 05:36:13 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:42.196 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.196 --rc genhtml_branch_coverage=1 00:25:42.196 --rc genhtml_function_coverage=1 00:25:42.196 --rc genhtml_legend=1 00:25:42.196 --rc geninfo_all_blocks=1 00:25:42.196 --rc geninfo_unexecuted_blocks=1 00:25:42.196 00:25:42.196 ' 00:25:42.196 05:36:13 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:42.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.196 --rc genhtml_branch_coverage=1 00:25:42.196 --rc genhtml_function_coverage=1 00:25:42.196 --rc genhtml_legend=1 00:25:42.196 --rc geninfo_all_blocks=1 00:25:42.196 --rc geninfo_unexecuted_blocks=1 00:25:42.196 00:25:42.196 ' 00:25:42.196 05:36:13 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:42.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.196 --rc genhtml_branch_coverage=1 00:25:42.196 --rc genhtml_function_coverage=1 00:25:42.196 --rc genhtml_legend=1 00:25:42.196 --rc geninfo_all_blocks=1 00:25:42.196 --rc geninfo_unexecuted_blocks=1 00:25:42.196 00:25:42.196 ' 00:25:42.196 05:36:13 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:42.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.196 --rc genhtml_branch_coverage=1 00:25:42.196 --rc genhtml_function_coverage=1 00:25:42.196 --rc genhtml_legend=1 00:25:42.196 --rc geninfo_all_blocks=1 00:25:42.196 --rc geninfo_unexecuted_blocks=1 00:25:42.196 00:25:42.196 ' 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:25:42.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=87408 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 87408 00:25:42.196 05:36:13 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 87408 ']' 00:25:42.196 05:36:13 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.196 05:36:13 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:42.196 05:36:13 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.196 05:36:13 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:42.196 05:36:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:42.196 05:36:13 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:25:42.196 [2024-11-20 05:36:13.904308] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:25:42.196 [2024-11-20 05:36:13.904592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87408 ] 00:25:42.455 [2024-11-20 05:36:14.056633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.455 [2024-11-20 05:36:14.152900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.022 05:36:14 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:43.023 05:36:14 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:25:43.023 05:36:14 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:25:43.023 05:36:14 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:25:43.023 05:36:14 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:25:43.023 05:36:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.023 05:36:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:43.023 Malloc0 00:25:43.023 Malloc1 00:25:43.282 Malloc2 00:25:43.282 05:36:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.282 05:36:14 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:25:43.282 05:36:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.282 05:36:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:43.282 05:36:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.282 05:36:14 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:25:43.282 05:36:14 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:25:43.282 05:36:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.282 05:36:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:43.282 05:36:14 
blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.282 05:36:14 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:25:43.282 05:36:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.283 05:36:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:43.283 05:36:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.283 05:36:14 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:25:43.283 05:36:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.283 05:36:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:43.283 05:36:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.283 05:36:14 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:25:43.283 05:36:14 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:25:43.283 05:36:14 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:25:43.283 05:36:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.283 05:36:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:43.283 05:36:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.283 05:36:14 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:25:43.283 05:36:14 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:25:43.283 05:36:14 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "a1a77339-6ca0-4192-9063-e9d9e973d0f3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "a1a77339-6ca0-4192-9063-e9d9e973d0f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "a1a77339-6ca0-4192-9063-e9d9e973d0f3",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "297ef7b0-fdbc-4136-b85a-c6f6ab7bf39d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "e027825e-c8bd-4e5d-b01d-b8d4403ed4bd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "72777c2d-f4da-4e57-8d8a-f851ce64bbee",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:25:43.283 05:36:15 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:25:43.283 05:36:15 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:25:43.283 05:36:15 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:25:43.283 05:36:15 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 87408 00:25:43.283 05:36:15 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 87408 ']' 00:25:43.283 05:36:15 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 87408 00:25:43.283 05:36:15 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:25:43.283 05:36:15 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:43.283 
05:36:15 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87408 00:25:43.283 killing process with pid 87408 00:25:43.283 05:36:15 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:43.283 05:36:15 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:43.283 05:36:15 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87408' 00:25:43.283 05:36:15 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 87408 00:25:43.283 05:36:15 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 87408 00:25:45.222 05:36:16 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:45.222 05:36:16 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:25:45.222 05:36:16 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:25:45.222 05:36:16 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:45.222 05:36:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:45.222 ************************************ 00:25:45.222 START TEST bdev_hello_world 00:25:45.222 ************************************ 00:25:45.222 05:36:16 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:25:45.222 [2024-11-20 05:36:16.837666] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:25:45.222 [2024-11-20 05:36:16.837786] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87459 ] 00:25:45.222 [2024-11-20 05:36:17.002579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.483 [2024-11-20 05:36:17.107764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.745 [2024-11-20 05:36:17.495282] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:25:45.745 [2024-11-20 05:36:17.495335] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:25:45.745 [2024-11-20 05:36:17.495352] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:25:45.745 [2024-11-20 05:36:17.495846] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:25:45.745 [2024-11-20 05:36:17.495972] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:25:45.745 [2024-11-20 05:36:17.495990] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:25:45.745 [2024-11-20 05:36:17.496044] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:25:45.745 00:25:45.745 [2024-11-20 05:36:17.496060] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:25:46.685 00:25:46.685 real 0m1.656s 00:25:46.685 user 0m1.347s 00:25:46.685 sys 0m0.186s 00:25:46.685 ************************************ 00:25:46.685 END TEST bdev_hello_world 00:25:46.685 ************************************ 00:25:46.685 05:36:18 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:46.685 05:36:18 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:25:46.685 05:36:18 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:25:46.685 05:36:18 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:46.685 05:36:18 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:46.685 05:36:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:46.685 ************************************ 00:25:46.686 START TEST bdev_bounds 00:25:46.686 ************************************ 00:25:46.686 05:36:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:25:46.686 Process bdevio pid: 87495 00:25:46.686 05:36:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=87495 00:25:46.686 05:36:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:25:46.686 05:36:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 87495' 00:25:46.686 05:36:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 87495 00:25:46.686 05:36:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 87495 ']' 00:25:46.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:46.686 05:36:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.686 05:36:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:46.686 05:36:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.686 05:36:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:46.686 05:36:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:46.686 05:36:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:46.947 [2024-11-20 05:36:18.537095] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:25:46.947 [2024-11-20 05:36:18.537230] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87495 ] 00:25:46.947 [2024-11-20 05:36:18.695603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:47.209 [2024-11-20 05:36:18.799942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.209 [2024-11-20 05:36:18.800466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.209 [2024-11-20 05:36:18.800489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.857 05:36:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:47.857 05:36:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:25:47.857 05:36:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py 
perform_tests 00:25:47.857 I/O targets: 00:25:47.857 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:25:47.857 00:25:47.857 00:25:47.857 CUnit - A unit testing framework for C - Version 2.1-3 00:25:47.857 http://cunit.sourceforge.net/ 00:25:47.857 00:25:47.857 00:25:47.857 Suite: bdevio tests on: raid5f 00:25:47.857 Test: blockdev write read block ...passed 00:25:47.857 Test: blockdev write zeroes read block ...passed 00:25:47.857 Test: blockdev write zeroes read no split ...passed 00:25:47.857 Test: blockdev write zeroes read split ...passed 00:25:47.857 Test: blockdev write zeroes read split partial ...passed 00:25:47.857 Test: blockdev reset ...passed 00:25:47.857 Test: blockdev write read 8 blocks ...passed 00:25:47.857 Test: blockdev write read size > 128k ...passed 00:25:47.857 Test: blockdev write read invalid size ...passed 00:25:47.857 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:47.857 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:47.857 Test: blockdev write read max offset ...passed 00:25:47.857 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:47.857 Test: blockdev writev readv 8 blocks ...passed 00:25:47.857 Test: blockdev writev readv 30 x 1block ...passed 00:25:47.857 Test: blockdev writev readv block ...passed 00:25:48.116 Test: blockdev writev readv size > 128k ...passed 00:25:48.116 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:48.116 Test: blockdev comparev and writev ...passed 00:25:48.116 Test: blockdev nvme passthru rw ...passed 00:25:48.116 Test: blockdev nvme passthru vendor specific ...passed 00:25:48.116 Test: blockdev nvme admin passthru ...passed 00:25:48.116 Test: blockdev copy ...passed 00:25:48.116 00:25:48.116 Run Summary: Type Total Ran Passed Failed Inactive 00:25:48.116 suites 1 1 n/a 0 0 00:25:48.116 tests 23 23 23 0 0 00:25:48.116 asserts 130 130 130 0 n/a 00:25:48.116 00:25:48.116 Elapsed time = 0.467 seconds 
00:25:48.116 0 00:25:48.116 05:36:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 87495 00:25:48.116 05:36:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 87495 ']' 00:25:48.116 05:36:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 87495 00:25:48.116 05:36:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:25:48.116 05:36:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:48.116 05:36:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87495 00:25:48.116 killing process with pid 87495 00:25:48.116 05:36:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:48.116 05:36:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:48.116 05:36:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87495' 00:25:48.116 05:36:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 87495 00:25:48.116 05:36:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 87495 00:25:49.049 05:36:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:25:49.049 00:25:49.049 real 0m2.177s 00:25:49.049 user 0m5.439s 00:25:49.049 sys 0m0.289s 00:25:49.049 05:36:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:49.049 05:36:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:49.049 ************************************ 00:25:49.049 END TEST bdev_bounds 00:25:49.049 ************************************ 00:25:49.049 05:36:20 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:25:49.049 05:36:20 blockdev_raid5f -- common/autotest_common.sh@1103 
-- # '[' 5 -le 1 ']' 00:25:49.049 05:36:20 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:49.049 05:36:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:49.049 ************************************ 00:25:49.049 START TEST bdev_nbd 00:25:49.049 ************************************ 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:25:49.049 05:36:20 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=87549 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 87549 /var/tmp/spdk-nbd.sock 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 87549 ']' 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:25:49.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:25:49.049 05:36:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:49.050 05:36:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:25:49.050 05:36:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:49.050 05:36:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:25:49.050 [2024-11-20 05:36:20.760744] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:25:49.050 [2024-11-20 05:36:20.761007] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.350 [2024-11-20 05:36:20.911673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.350 [2024-11-20 05:36:21.012685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.939 05:36:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:49.939 05:36:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:25:49.939 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:25:49.939 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:49.939 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:25:49.939 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:25:49.939 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:25:49.939 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:49.939 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:25:49.939 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:25:49.939 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:25:49.939 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:25:49.939 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:25:49.939 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:25:49.939 05:36:21 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:50.197 1+0 records in 00:25:50.197 1+0 records out 00:25:50.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335617 s, 12.2 MB/s 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:25:50.197 05:36:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:50.455 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:25:50.455 { 00:25:50.455 "nbd_device": "/dev/nbd0", 00:25:50.455 "bdev_name": "raid5f" 00:25:50.455 } 00:25:50.455 ]' 00:25:50.455 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:25:50.455 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:25:50.455 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:25:50.455 { 00:25:50.455 "nbd_device": "/dev/nbd0", 00:25:50.455 "bdev_name": "raid5f" 00:25:50.455 } 00:25:50.455 ]' 00:25:50.455 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:50.455 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:50.455 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:50.455 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:50.455 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:50.455 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:50.455 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:50.713 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:25:50.713 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:50.713 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:50.713 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:50.713 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:50.713 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:50.713 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:50.713 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:50.713 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:50.713 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:50.713 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:50.970 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:25:50.970 /dev/nbd0 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:51.229 05:36:22 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:51.229 1+0 records in 00:25:51.229 1+0 records out 00:25:51.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287612 s, 14.2 MB/s 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:51.229 05:36:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:51.229 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:25:51.229 { 00:25:51.229 "nbd_device": "/dev/nbd0", 00:25:51.229 "bdev_name": "raid5f" 00:25:51.229 } 00:25:51.229 ]' 00:25:51.229 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:25:51.229 { 00:25:51.229 "nbd_device": "/dev/nbd0", 00:25:51.229 "bdev_name": "raid5f" 00:25:51.229 } 00:25:51.229 ]' 00:25:51.229 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:25:51.488 256+0 records in 00:25:51.488 256+0 records out 00:25:51.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00730915 s, 143 MB/s 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:25:51.488 256+0 records in 00:25:51.488 256+0 records out 00:25:51.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308839 s, 34.0 MB/s 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:51.488 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:51.746 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:51.746 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:51.746 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:51.747 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:51.747 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:51.747 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:51.747 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:51.747 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:51.747 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:51.747 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:51.747 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:25:51.747 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:52.004 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:52.004 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:52.004 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:52.004 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:25:52.004 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:52.004 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:25:52.004 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:25:52.004 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:25:52.004 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:25:52.004 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:25:52.004 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:25:52.004 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:52.004 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:52.004 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:25:52.004 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:25:52.004 malloc_lvol_verify 00:25:52.004 05:36:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:25:52.263 9747e04c-041f-4b46-810f-a7d69c21540d 00:25:52.263 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:25:52.521 027502b2-106a-4461-bfa1-b030b9fd071d 00:25:52.521 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:25:52.779 /dev/nbd0 00:25:52.779 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:25:52.779 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:25:52.779 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:25:52.779 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:25:52.779 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:25:52.779 mke2fs 1.47.0 (5-Feb-2023) 00:25:52.779 Discarding device blocks: 0/4096 done 00:25:52.779 Creating filesystem with 4096 1k blocks and 1024 inodes 00:25:52.779 00:25:52.779 Allocating group tables: 0/1 done 00:25:52.779 Writing inode tables: 0/1 done 00:25:52.779 Creating journal (1024 blocks): done 00:25:52.779 Writing superblocks and filesystem accounting information: 0/1 done 00:25:52.779 00:25:52.779 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:52.779 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:52.779 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:52.779 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:52.779 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:52.779 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:52.779 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:53.036 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 87549 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 87549 ']' 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 87549 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87549 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87549' 00:25:53.037 killing process with pid 87549 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 87549 00:25:53.037 05:36:24 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@976 -- # wait 87549 00:25:53.719 05:36:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:25:53.719 00:25:53.719 real 0m4.732s 00:25:53.719 user 0m6.915s 00:25:53.719 sys 0m0.953s 00:25:53.719 05:36:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:53.719 05:36:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:25:53.719 ************************************ 00:25:53.719 END TEST bdev_nbd 00:25:53.719 ************************************ 00:25:53.719 05:36:25 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:25:53.719 05:36:25 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:25:53.719 05:36:25 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:25:53.719 05:36:25 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:25:53.719 05:36:25 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:53.720 05:36:25 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:53.720 05:36:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:53.720 ************************************ 00:25:53.720 START TEST bdev_fio 00:25:53.720 ************************************ 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:25:53.720 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:25:53.720 05:36:25 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* 
]] 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:53.720 05:36:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:25:53.980 ************************************ 00:25:53.980 START TEST bdev_fio_rw_verify 00:25:53.980 ************************************ 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # break 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:53.980 05:36:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:25:53.980 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:25:53.980 fio-3.35 00:25:53.980 Starting 1 thread 00:26:06.220 00:26:06.220 job_raid5f: (groupid=0, jobs=1): err= 0: pid=87741: Wed Nov 20 05:36:36 2024 00:26:06.220 read: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(448MiB/10001msec) 00:26:06.220 slat (nsec): min=17292, max=99400, avg=20833.16, stdev=2996.59 00:26:06.220 clat (usec): min=9, max=517, avg=141.77, stdev=52.11 00:26:06.220 lat (usec): min=27, max=566, avg=162.61, stdev=53.04 00:26:06.220 clat percentiles (usec): 00:26:06.220 | 50.000th=[ 139], 99.000th=[ 253], 99.900th=[ 289], 99.990th=[ 347], 00:26:06.220 | 99.999th=[ 482] 00:26:06.220 write: IOPS=12.0k, BW=46.8MiB/s (49.1MB/s)(462MiB/9868msec); 0 zone resets 00:26:06.220 slat (usec): min=7, max=189, avg=17.80, stdev= 3.67 00:26:06.220 clat (usec): min=55, max=1014, avg=319.25, stdev=55.55 00:26:06.220 lat (usec): min=70, max=1044, avg=337.05, stdev=57.63 00:26:06.220 clat percentiles (usec): 00:26:06.220 | 50.000th=[ 314], 99.000th=[ 469], 99.900th=[ 594], 99.990th=[ 758], 00:26:06.220 | 99.999th=[ 988] 00:26:06.220 bw ( KiB/s): min=40200, max=55056, per=98.40%, avg=47181.89, stdev=5415.02, samples=19 00:26:06.220 iops : min=10050, max=13764, avg=11795.58, stdev=1353.76, samples=19 00:26:06.220 lat (usec) : 10=0.01%, 20=0.01%, 
50=0.01%, 100=13.93%, 250=39.35% 00:26:06.220 lat (usec) : 500=46.49%, 750=0.22%, 1000=0.01% 00:26:06.220 lat (msec) : 2=0.01% 00:26:06.220 cpu : usr=99.13%, sys=0.19%, ctx=21, majf=0, minf=9460 00:26:06.220 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.220 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.220 issued rwts: total=114708,118287,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.220 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:06.220 00:26:06.220 Run status group 0 (all jobs): 00:26:06.220 READ: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=448MiB (470MB), run=10001-10001msec 00:26:06.220 WRITE: bw=46.8MiB/s (49.1MB/s), 46.8MiB/s-46.8MiB/s (49.1MB/s-49.1MB/s), io=462MiB (485MB), run=9868-9868msec 00:26:06.220 ----------------------------------------------------- 00:26:06.220 Suppressions used: 00:26:06.220 count bytes template 00:26:06.220 1 7 /usr/src/fio/parse.c 00:26:06.220 25 2400 /usr/src/fio/iolog.c 00:26:06.220 1 8 libtcmalloc_minimal.so 00:26:06.220 1 904 libcrypto.so 00:26:06.220 ----------------------------------------------------- 00:26:06.220 00:26:06.220 00:26:06.220 real 0m12.152s 00:26:06.220 user 0m12.853s 00:26:06.220 sys 0m0.608s 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:26:06.221 ************************************ 00:26:06.221 END TEST bdev_fio_rw_verify 00:26:06.221 ************************************ 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio 
-- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "a1a77339-6ca0-4192-9063-e9d9e973d0f3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "a1a77339-6ca0-4192-9063-e9d9e973d0f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "a1a77339-6ca0-4192-9063-e9d9e973d0f3",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "297ef7b0-fdbc-4136-b85a-c6f6ab7bf39d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "e027825e-c8bd-4e5d-b01d-b8d4403ed4bd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "72777c2d-f4da-4e57-8d8a-f851ce64bbee",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:26:06.221 /home/vagrant/spdk_repo/spdk 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:26:06.221 00:26:06.221 real 0m12.321s 00:26:06.221 user 0m12.923s 00:26:06.221 sys 0m0.693s 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:06.221 ************************************ 00:26:06.221 05:36:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:06.221 END TEST bdev_fio 00:26:06.221 ************************************ 00:26:06.221 05:36:37 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:06.221 05:36:37 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:06.221 05:36:37 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:26:06.221 05:36:37 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:06.221 05:36:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:06.221 ************************************ 00:26:06.221 START TEST bdev_verify 00:26:06.221 ************************************ 00:26:06.221 05:36:37 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:06.221 [2024-11-20 05:36:37.897116] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 
00:26:06.221 [2024-11-20 05:36:37.897236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87899 ] 00:26:06.479 [2024-11-20 05:36:38.060048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:06.479 [2024-11-20 05:36:38.179817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.479 [2024-11-20 05:36:38.180005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.044 Running I/O for 5 seconds... 00:26:08.908 14000.00 IOPS, 54.69 MiB/s [2024-11-20T05:36:41.677Z] 15983.50 IOPS, 62.44 MiB/s [2024-11-20T05:36:42.612Z] 16795.33 IOPS, 65.61 MiB/s [2024-11-20T05:36:44.011Z] 18248.50 IOPS, 71.28 MiB/s [2024-11-20T05:36:44.011Z] 18484.40 IOPS, 72.20 MiB/s 00:26:12.176 Latency(us) 00:26:12.176 [2024-11-20T05:36:44.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.176 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:12.176 Verification LBA range: start 0x0 length 0x2000 00:26:12.176 raid5f : 5.02 9074.68 35.45 0.00 0.00 21159.90 109.49 64124.46 00:26:12.177 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:12.177 Verification LBA range: start 0x2000 length 0x2000 00:26:12.177 raid5f : 5.01 9412.26 36.77 0.00 0.00 20261.71 220.55 22685.54 00:26:12.177 [2024-11-20T05:36:44.012Z] =================================================================================================================== 00:26:12.177 [2024-11-20T05:36:44.012Z] Total : 18486.93 72.21 0.00 0.00 20703.10 109.49 64124.46 00:26:12.741 00:26:12.741 real 0m6.676s 00:26:12.741 user 0m12.411s 00:26:12.741 sys 0m0.221s 00:26:12.741 05:36:44 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:12.741 
************************************ 00:26:12.741 END TEST bdev_verify 00:26:12.741 ************************************ 00:26:12.741 05:36:44 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:26:12.741 05:36:44 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:12.741 05:36:44 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:26:12.741 05:36:44 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:12.741 05:36:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:12.741 ************************************ 00:26:12.741 START TEST bdev_verify_big_io 00:26:12.741 ************************************ 00:26:12.741 05:36:44 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:13.000 [2024-11-20 05:36:44.615051] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:26:13.000 [2024-11-20 05:36:44.615194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87992 ] 00:26:13.000 [2024-11-20 05:36:44.773562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:13.258 [2024-11-20 05:36:44.881212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.258 [2024-11-20 05:36:44.881340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.517 Running I/O for 5 seconds... 
00:26:15.857 695.00 IOPS, 43.44 MiB/s [2024-11-20T05:36:48.625Z] 919.00 IOPS, 57.44 MiB/s [2024-11-20T05:36:49.558Z] 973.00 IOPS, 60.81 MiB/s [2024-11-20T05:36:50.491Z] 998.75 IOPS, 62.42 MiB/s [2024-11-20T05:36:50.749Z] 990.20 IOPS, 61.89 MiB/s 00:26:18.914 Latency(us) 00:26:18.914 [2024-11-20T05:36:50.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.914 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:18.914 Verification LBA range: start 0x0 length 0x200 00:26:18.914 raid5f : 5.14 469.48 29.34 0.00 0.00 6600552.82 203.22 312959.61 00:26:18.914 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:18.914 Verification LBA range: start 0x200 length 0x200 00:26:18.914 raid5f : 5.22 534.99 33.44 0.00 0.00 5823745.39 147.30 287148.50 00:26:18.914 [2024-11-20T05:36:50.749Z] =================================================================================================================== 00:26:18.914 [2024-11-20T05:36:50.749Z] Total : 1004.47 62.78 0.00 0.00 6183867.64 147.30 312959.61 00:26:19.848 00:26:19.848 real 0m6.879s 00:26:19.848 user 0m12.829s 00:26:19.848 sys 0m0.214s 00:26:19.848 05:36:51 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:19.848 ************************************ 00:26:19.848 END TEST bdev_verify_big_io 00:26:19.849 05:36:51 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:26:19.849 ************************************ 00:26:19.849 05:36:51 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:19.849 05:36:51 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:26:19.849 05:36:51 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:19.849 05:36:51 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:19.849 ************************************ 00:26:19.849 START TEST bdev_write_zeroes 00:26:19.849 ************************************ 00:26:19.849 05:36:51 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:19.849 [2024-11-20 05:36:51.533255] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:26:19.849 [2024-11-20 05:36:51.533401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88089 ] 00:26:20.108 [2024-11-20 05:36:51.698776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.108 [2024-11-20 05:36:51.798961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.366 Running I/O for 1 seconds... 
00:26:21.740 23055.00 IOPS, 90.06 MiB/s 00:26:21.740 Latency(us) 00:26:21.740 [2024-11-20T05:36:53.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.740 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:21.740 raid5f : 1.01 23027.20 89.95 0.00 0.00 5539.24 1543.88 7561.85 00:26:21.740 [2024-11-20T05:36:53.575Z] =================================================================================================================== 00:26:21.740 [2024-11-20T05:36:53.575Z] Total : 23027.20 89.95 0.00 0.00 5539.24 1543.88 7561.85 00:26:22.305 00:26:22.305 real 0m2.603s 00:26:22.305 user 0m2.297s 00:26:22.305 sys 0m0.180s 00:26:22.305 05:36:54 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:22.305 05:36:54 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:26:22.305 ************************************ 00:26:22.305 END TEST bdev_write_zeroes 00:26:22.305 ************************************ 00:26:22.305 05:36:54 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:22.305 05:36:54 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:26:22.305 05:36:54 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:22.305 05:36:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:22.305 ************************************ 00:26:22.305 START TEST bdev_json_nonenclosed 00:26:22.305 ************************************ 00:26:22.305 05:36:54 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:22.639 [2024-11-20 
05:36:54.172690] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:26:22.639 [2024-11-20 05:36:54.172823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88132 ] 00:26:22.639 [2024-11-20 05:36:54.324557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.639 [2024-11-20 05:36:54.410042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.639 [2024-11-20 05:36:54.410116] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:26:22.639 [2024-11-20 05:36:54.410135] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:22.639 [2024-11-20 05:36:54.410143] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:22.897 00:26:22.897 real 0m0.438s 00:26:22.897 user 0m0.254s 00:26:22.897 sys 0m0.080s 00:26:22.897 05:36:54 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:22.897 ************************************ 00:26:22.897 END TEST bdev_json_nonenclosed 00:26:22.897 ************************************ 00:26:22.897 05:36:54 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:26:22.897 05:36:54 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:22.897 05:36:54 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:26:22.897 05:36:54 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:22.897 05:36:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:22.897 
************************************ 00:26:22.897 START TEST bdev_json_nonarray 00:26:22.897 ************************************ 00:26:22.897 05:36:54 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:22.897 [2024-11-20 05:36:54.651214] Starting SPDK v25.01-pre git sha1 95f6a056e / DPDK 24.03.0 initialization... 00:26:22.897 [2024-11-20 05:36:54.651334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88158 ] 00:26:23.155 [2024-11-20 05:36:54.802497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.155 [2024-11-20 05:36:54.887683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.155 [2024-11-20 05:36:54.887755] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:26:23.155 [2024-11-20 05:36:54.887769] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:23.155 [2024-11-20 05:36:54.887781] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:23.413 00:26:23.413 real 0m0.450s 00:26:23.413 user 0m0.254s 00:26:23.413 sys 0m0.093s 00:26:23.413 05:36:55 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:23.413 05:36:55 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:26:23.413 ************************************ 00:26:23.413 END TEST bdev_json_nonarray 00:26:23.413 ************************************ 00:26:23.413 05:36:55 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:26:23.413 05:36:55 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:26:23.413 05:36:55 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:26:23.413 05:36:55 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:26:23.413 05:36:55 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:26:23.413 05:36:55 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:26:23.413 05:36:55 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:23.413 05:36:55 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:26:23.413 05:36:55 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:26:23.413 05:36:55 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:26:23.413 05:36:55 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:26:23.413 ************************************ 00:26:23.413 END TEST blockdev_raid5f 00:26:23.413 ************************************ 00:26:23.413 00:26:23.413 real 0m41.406s 00:26:23.413 user 0m57.832s 00:26:23.413 sys 0m3.636s 00:26:23.413 05:36:55 blockdev_raid5f -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:26:23.413 05:36:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:23.413 05:36:55 -- spdk/autotest.sh@194 -- # uname -s 00:26:23.413 05:36:55 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:26:23.413 05:36:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:26:23.413 05:36:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:26:23.413 05:36:55 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:26:23.413 05:36:55 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:26:23.413 05:36:55 -- spdk/autotest.sh@256 -- # timing_exit lib 00:26:23.413 05:36:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:23.413 05:36:55 -- common/autotest_common.sh@10 -- # set +x 00:26:23.413 05:36:55 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:26:23.413 05:36:55 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:26:23.413 05:36:55 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:26:23.413 05:36:55 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:26:23.413 05:36:55 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:23.413 05:36:55 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:26:23.413 05:36:55 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:26:23.413 05:36:55 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:26:23.413 05:36:55 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:26:23.413 05:36:55 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:26:23.413 05:36:55 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:23.413 05:36:55 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:23.413 05:36:55 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:26:23.413 05:36:55 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:23.413 05:36:55 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:26:23.413 05:36:55 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:23.413 05:36:55 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:23.413 05:36:55 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:26:23.413 05:36:55 -- spdk/autotest.sh@381 -- # trap - 
SIGINT SIGTERM EXIT 00:26:23.413 05:36:55 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:26:23.413 05:36:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:23.413 05:36:55 -- common/autotest_common.sh@10 -- # set +x 00:26:23.413 05:36:55 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:26:23.413 05:36:55 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:26:23.413 05:36:55 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:26:23.413 05:36:55 -- common/autotest_common.sh@10 -- # set +x 00:26:24.790 INFO: APP EXITING 00:26:24.790 INFO: killing all VMs 00:26:24.790 INFO: killing vhost app 00:26:24.790 INFO: EXIT DONE 00:26:25.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:25.047 Waiting for block devices as requested 00:26:25.047 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:25.047 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:25.660 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:25.660 Cleaning 00:26:25.660 Removing: /var/run/dpdk/spdk0/config 00:26:25.660 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:25.660 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:25.660 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:25.660 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:25.660 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:25.660 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:25.660 Removing: /dev/shm/spdk_tgt_trace.pid56210 00:26:25.660 Removing: /var/run/dpdk/spdk0 00:26:25.660 Removing: /var/run/dpdk/spdk_pid56003 00:26:25.660 Removing: /var/run/dpdk/spdk_pid56210 00:26:25.660 Removing: /var/run/dpdk/spdk_pid56428 00:26:25.660 Removing: /var/run/dpdk/spdk_pid56527 00:26:25.660 Removing: /var/run/dpdk/spdk_pid56566 00:26:25.660 Removing: /var/run/dpdk/spdk_pid56694 00:26:25.660 Removing: 
/var/run/dpdk/spdk_pid56712 00:26:25.660 Removing: /var/run/dpdk/spdk_pid56906 00:26:25.660 Removing: /var/run/dpdk/spdk_pid57004 00:26:25.660 Removing: /var/run/dpdk/spdk_pid57100 00:26:25.660 Removing: /var/run/dpdk/spdk_pid57217 00:26:25.660 Removing: /var/run/dpdk/spdk_pid57313 00:26:25.660 Removing: /var/run/dpdk/spdk_pid57348 00:26:25.660 Removing: /var/run/dpdk/spdk_pid57390 00:26:25.660 Removing: /var/run/dpdk/spdk_pid57455 00:26:25.660 Removing: /var/run/dpdk/spdk_pid57539 00:26:25.660 Removing: /var/run/dpdk/spdk_pid57975 00:26:25.660 Removing: /var/run/dpdk/spdk_pid58034 00:26:25.660 Removing: /var/run/dpdk/spdk_pid58091 00:26:25.660 Removing: /var/run/dpdk/spdk_pid58107 00:26:25.660 Removing: /var/run/dpdk/spdk_pid58209 00:26:25.660 Removing: /var/run/dpdk/spdk_pid58225 00:26:25.660 Removing: /var/run/dpdk/spdk_pid58322 00:26:25.660 Removing: /var/run/dpdk/spdk_pid58338 00:26:25.660 Removing: /var/run/dpdk/spdk_pid58391 00:26:25.660 Removing: /var/run/dpdk/spdk_pid58409 00:26:25.660 Removing: /var/run/dpdk/spdk_pid58462 00:26:25.660 Removing: /var/run/dpdk/spdk_pid58480 00:26:25.660 Removing: /var/run/dpdk/spdk_pid58653 00:26:25.660 Removing: /var/run/dpdk/spdk_pid58684 00:26:25.660 Removing: /var/run/dpdk/spdk_pid58773 00:26:25.660 Removing: /var/run/dpdk/spdk_pid60007 00:26:25.660 Removing: /var/run/dpdk/spdk_pid60205 00:26:25.660 Removing: /var/run/dpdk/spdk_pid60339 00:26:25.660 Removing: /var/run/dpdk/spdk_pid60944 00:26:25.660 Removing: /var/run/dpdk/spdk_pid61139 00:26:25.660 Removing: /var/run/dpdk/spdk_pid61273 00:26:25.660 Removing: /var/run/dpdk/spdk_pid61878 00:26:25.660 Removing: /var/run/dpdk/spdk_pid62192 00:26:25.660 Removing: /var/run/dpdk/spdk_pid62326 00:26:25.660 Removing: /var/run/dpdk/spdk_pid63647 00:26:25.660 Removing: /var/run/dpdk/spdk_pid63889 00:26:25.660 Removing: /var/run/dpdk/spdk_pid64018 00:26:25.660 Removing: /var/run/dpdk/spdk_pid65342 00:26:25.660 Removing: /var/run/dpdk/spdk_pid65579 00:26:25.660 Removing: 
/var/run/dpdk/spdk_pid65719 00:26:25.660 Removing: /var/run/dpdk/spdk_pid67032 00:26:25.660 Removing: /var/run/dpdk/spdk_pid67456 00:26:25.660 Removing: /var/run/dpdk/spdk_pid67596 00:26:25.660 Removing: /var/run/dpdk/spdk_pid69004 00:26:25.660 Removing: /var/run/dpdk/spdk_pid69252 00:26:25.660 Removing: /var/run/dpdk/spdk_pid69392 00:26:25.660 Removing: /var/run/dpdk/spdk_pid70801 00:26:25.660 Removing: /var/run/dpdk/spdk_pid71049 00:26:25.660 Removing: /var/run/dpdk/spdk_pid71189 00:26:25.660 Removing: /var/run/dpdk/spdk_pid72598 00:26:25.660 Removing: /var/run/dpdk/spdk_pid73063 00:26:25.919 Removing: /var/run/dpdk/spdk_pid73192 00:26:25.919 Removing: /var/run/dpdk/spdk_pid73330 00:26:25.919 Removing: /var/run/dpdk/spdk_pid73726 00:26:25.919 Removing: /var/run/dpdk/spdk_pid74439 00:26:25.919 Removing: /var/run/dpdk/spdk_pid74817 00:26:25.919 Removing: /var/run/dpdk/spdk_pid75478 00:26:25.919 Removing: /var/run/dpdk/spdk_pid75914 00:26:25.919 Removing: /var/run/dpdk/spdk_pid76654 00:26:25.919 Removing: /var/run/dpdk/spdk_pid77052 00:26:25.919 Removing: /var/run/dpdk/spdk_pid78946 00:26:25.919 Removing: /var/run/dpdk/spdk_pid79368 00:26:25.919 Removing: /var/run/dpdk/spdk_pid79786 00:26:25.919 Removing: /var/run/dpdk/spdk_pid81770 00:26:25.919 Removing: /var/run/dpdk/spdk_pid82233 00:26:25.919 Removing: /var/run/dpdk/spdk_pid82733 00:26:25.919 Removing: /var/run/dpdk/spdk_pid83765 00:26:25.919 Removing: /var/run/dpdk/spdk_pid84077 00:26:25.919 Removing: /var/run/dpdk/spdk_pid84977 00:26:25.919 Removing: /var/run/dpdk/spdk_pid85283 00:26:25.919 Removing: /var/run/dpdk/spdk_pid86188 00:26:25.919 Removing: /var/run/dpdk/spdk_pid86494 00:26:25.919 Removing: /var/run/dpdk/spdk_pid87143 00:26:25.919 Removing: /var/run/dpdk/spdk_pid87408 00:26:25.919 Removing: /var/run/dpdk/spdk_pid87459 00:26:25.919 Removing: /var/run/dpdk/spdk_pid87495 00:26:25.919 Removing: /var/run/dpdk/spdk_pid87726 00:26:25.919 Removing: /var/run/dpdk/spdk_pid87899 00:26:25.919 Removing: 
/var/run/dpdk/spdk_pid87992 00:26:25.919 Removing: /var/run/dpdk/spdk_pid88089 00:26:25.919 Removing: /var/run/dpdk/spdk_pid88132 00:26:25.919 Removing: /var/run/dpdk/spdk_pid88158 00:26:25.919 Clean 00:26:25.919 05:36:57 -- common/autotest_common.sh@1451 -- # return 0 00:26:25.919 05:36:57 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:26:25.919 05:36:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:25.919 05:36:57 -- common/autotest_common.sh@10 -- # set +x 00:26:25.919 05:36:57 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:26:25.919 05:36:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:25.919 05:36:57 -- common/autotest_common.sh@10 -- # set +x 00:26:25.919 05:36:57 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:25.919 05:36:57 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:25.919 05:36:57 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:25.919 05:36:57 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:26:25.919 05:36:57 -- spdk/autotest.sh@394 -- # hostname 00:26:25.919 05:36:57 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:26.178 geninfo: WARNING: invalid characters removed from testname! 
00:26:48.095 05:37:19 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:51.372 05:37:22 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:53.899 05:37:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:55.798 05:37:27 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:58.328 05:37:29 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:59.702 05:37:31 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:02.267 05:37:33 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:02.267 05:37:33 -- spdk/autorun.sh@1 -- $ timing_finish 00:27:02.267 05:37:33 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:27:02.267 05:37:33 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:02.267 05:37:33 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:27:02.267 05:37:33 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:02.267 + [[ -n 5000 ]] 00:27:02.267 + sudo kill 5000 00:27:02.276 [Pipeline] } 00:27:02.292 [Pipeline] // timeout 00:27:02.298 [Pipeline] } 00:27:02.315 [Pipeline] // stage 00:27:02.321 [Pipeline] } 00:27:02.336 [Pipeline] // catchError 00:27:02.345 [Pipeline] stage 00:27:02.348 [Pipeline] { (Stop VM) 00:27:02.363 [Pipeline] sh 00:27:02.643 + vagrant halt 00:27:05.929 ==> default: Halting domain... 00:27:09.251 [Pipeline] sh 00:27:09.534 + vagrant destroy -f 00:27:12.832 ==> default: Removing domain... 
00:27:12.846 [Pipeline] sh 00:27:13.126 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:27:13.134 [Pipeline] } 00:27:13.148 [Pipeline] // stage 00:27:13.154 [Pipeline] } 00:27:13.168 [Pipeline] // dir 00:27:13.173 [Pipeline] } 00:27:13.187 [Pipeline] // wrap 00:27:13.195 [Pipeline] } 00:27:13.208 [Pipeline] // catchError 00:27:13.217 [Pipeline] stage 00:27:13.219 [Pipeline] { (Epilogue) 00:27:13.233 [Pipeline] sh 00:27:13.559 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:20.181 [Pipeline] catchError 00:27:20.183 [Pipeline] { 00:27:20.195 [Pipeline] sh 00:27:20.480 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:20.480 Artifacts sizes are good 00:27:20.490 [Pipeline] } 00:27:20.504 [Pipeline] // catchError 00:27:20.515 [Pipeline] archiveArtifacts 00:27:20.522 Archiving artifacts 00:27:20.624 [Pipeline] cleanWs 00:27:20.636 [WS-CLEANUP] Deleting project workspace... 00:27:20.636 [WS-CLEANUP] Deferred wipeout is used... 00:27:20.643 [WS-CLEANUP] done 00:27:20.645 [Pipeline] } 00:27:20.659 [Pipeline] // stage 00:27:20.664 [Pipeline] } 00:27:20.679 [Pipeline] // node 00:27:20.685 [Pipeline] End of Pipeline 00:27:20.731 Finished: SUCCESS